Crab Nebula (M1) — supernova remnant imaged by Herschel and Hubble Space Telescopes

Category: Reports

Energy Science reports

Crab Nebula (M1), supernova remnant · ESA/Herschel/PACS; NASA, ESA & A. Loll/J. Hester (Arizona State Univ.) · NASA Image Library ↗

  • Www Energyscience Org Uk Reports Es5 Esr5

    ENERGY SCIENCE REPORT No. 5

    This Report was first published by the author in 1994 and was reissued later and made more generally available from Sabberton Publications as ISBN 0 085056 0217 in October 1996. It is now made available freely via this Internet facility. It concerns theory pertaining to the creation and properties of deuterons which, as present in atoms in heavy water, deuterium oxide, are involved in the experiments which gave birth to the notion of ‘cold fusion’. The technology of that field is slow to develop and, though the author did plan to write a Part II Report as a sequel to this report, which is entitled POWER FROM WATER: COLD FUSION: PART I, this has not materialized. This Report nevertheless is an important contribution to the theory of the subject, also because it explains how the triton, the third isotope of hydrogen is created. It is worthy of study as an adjunct to the author’s latest work, the book: The Physics of Creation, because the latter explains in updated detail how the proton itself, the primary isotope of hydrogen is created. For this reason it is given priority in updating this website by now adding progressively each of these ten Energy Science Reports as they are withdrawn from normal printed publication. It should be noted that the book just referenced is a substantial work and should not be confused with Appendix A of this Report, which has the same title. The latter featured as a 12 page Chapter 4 in the author’s book GRAVITATION, published in 1975, which gives an early insight into what has now become a 28 year-old account of the origin of proton creation. ………. Harold Aspden, 1 June 2003

    ENERGY SCIENCE REPORT NO. 5

    POWER FROM WATER: COLD FUSION: PART I

    © HAROLD ASPDEN, 1994
    Contents
    Introduction
    The Black Hole Syndrome
    The Triton Factor
    Conclusion
    APPENDIX A: ‘The Physics of Creation’
    APPENDIX B: Deuteron-Proton Transmutation
    APPENDIX C: Triton Lifetime Theory
    APPENDIX D: ‘The Theory of the Proton Constants’
    APPENDIX E: ‘The Neutron and the Deuteron’*
    * These papers are reproduced here by kind permission of the Editors of the Hadronic Journal
    POWER FROM WATER: COLD FUSION: PART I

    Introduction

    This Energy Science Report draws attention to the revelance of
    theoretical work pursued by the author over many years before the advent
    of the now well-known ‘cold fusion’ discoveries reported from Utah in
    1989.

    It will be followed by a Cold Fusion: Part II Report, which
    will be more specifically directed to the author’s patented technology
    which is emerging from this theoretical base.

    The object of this Report is to show how the ‘cold fusion’
    scenario is destined to impact the whole field of fundamental physics,
    ranging from cosmology generally to the pursuit of energy generation
    techniques that are so fundamental that they can harness the still-latent and ever-present forces which brought about the creation of the
    universe.

    These Energy Science Reports are all connected with that
    underlying groundwork in energy physics that the author has surveyed,
    driven by his interest in magnetism. Thus Energy Science Report No. 1
    concerned ‘Power from Magnetism’ and described three of the author’s
    experiments which point the way forward to what many term ‘free energy’.

    We are assuredly destined to see rapid strides in this technological
    field in the months and years ahead and we will enter the 21st century
    with a whole new vision of our energy future.

    Only today, 15th April, 1994, as the author writes the first words
    of this report, a communication was received which draws attention to
    what is termed ‘UDT’ – Unidirectional Transformer – which Paul Raymond
    Jensen of Santa Barbara, California claims to have invented. When
    readers of my Energy Science Report No. 1 become aware of Jensen’s
    ‘UDT’ and compare the transformer with that shown in Fig. 4 of that
    Report they will see how the solid-state ‘free energy’ ferromagnetic
    device can now emerge on the ‘free energy’ scene.

    With the same prospect evolving on the magnetic reluctance motor
    using permanent magnets, as championed, for example, by New Zealander
    Robert G. Adams, this author has planned an Energy Science Report
    concerned with motor technology. However, here also, whilst currently
    in the throes of experimentation, it has come to light that a researcher
    named Frank F. Potter has, for many years, been urging university
    professors in U.K. to work on the prospect of tapping the energy field
    that powers a magnet. He has challenged them to do the calculations
    on specific field coupling involving magnets to prove the case one way
    or the other.

    In spite of the interest engendered, the usual establishment reserve
    about the so-called ‘perpetual motion’ machine has kept the Potter
    issue private and not brought it into open forum. However, this author,
    having now heard of this, has responded to the challenge and has brought
    ahead of schedule ‘Energy Science Report No. 4: The Potter Debate’
    which was completed on 10th April, 1994. That Report provides a
    mathematical basis which will help critics of the ‘free energy’ field to
    come to terms with what is now bound to disturb the world of those
    experts who know how to design electrical transformers and chokes but
    appear not to know how close they are to a new technology that can
    provide an energy bonanza.

    The intervening Energy Science Reports Nos. 2 and 3 are captioned
    ‘Power from Ice’, and relate to experimental work on a thermoelectric
    energy converter in which the author is involved as inventor. These
    Reports exist only in confidential draft form at this time but that
    technology does spill over into something that will be said about the
    ‘cold fusion’ research, particularly in the Part II Report.

    This introduction, therefore, explains how this text fits into the
    series of Energy Science Reports by which the author has chosen to update
    his published research findings prior to incorporation and consolidation
    in a more formal book form. The ‘free energy’ scene is now evolving
    so rapidly that it is better if such a book is written once the author has
    possession of his own working ‘free energy’ generator and can provide
    full test data on a practical system.

    The Black Hole Syndrome

    This may seem an unusual heading for a text about ‘cold fusion’,
    but the physicists who believe in ‘black holes’ think as do physicists who
    do not believe in ‘cold fusion’. This is a very relevant comparison as
    one can see from the following remarks.

    1. All physics is built on observation of how electrical particles
    behave, whether individually or in aggregation as matter. The
    interaction forces between such particles control their coming
    together, whether to form atomic nuclei, molecules, composites,
    crystals or stars and planets.

    2. Physicists tend to extrapolate their knowledge of experimental
    behaviour to realms far beyond the bounds governing the conditions
    of their experiments. They seek to probe territory they cannot
    reach, but always build with confidence on the certain founding
    knowledge derived from experiments on what they can see or what
    they can explore in a laboratory.

    3. The neutron as a real particle has only been detected upon
    creation in the free state and it has a half-lifetime measured in
    minutes but physicists extrapolate and create ‘neutron stars’ in
    their imagination, stars which survive far longer than a few
    minutes! They cannot ‘see’ a neutron in an atom, such as in the
    deuteron, but they assume it is there because the deuteron has two
    atomic units of mass and one of charge. But, surely, one could
    better surmise that two anti-protons plus three positive beta-particles represent two atomic units of mass having one positive
    unit of charge. We know that atoms decay by shedding beta-particles and what we could suspect is that, if they shed an anti-proton and a positive beta-particle, so that would manifest
    itself as a short-lived ‘neutron’! If, on this basis, there are no
    neutrons in an atomic nucleus such as the deuteron, then is it
    really surprising to find no neutron emission when we contrive to
    capture those positive beta-particles by free conduction
    electrons in the host metal cathode of a cold fusion cell? On
    the contrary, physicists go the other way and make their
    unwarranted quantum leap by recognizing that neutrons are able
    to form stars that have no electrical resistance to the crushing
    force of gravitation – even though those free neutrons in the
    laboratory show a substantial negative magnetic moment!

    4. They cannot see a ‘black hole’, but they can imagine matter
    becoming so compact, as gravitational interaction forces become
    so strong as to out-weigh and preclude the intervention of
    electric forces between those charge constituents of the neutron.
    They are thereby assuming that gravitation is a force so
    fundamental that it transcends and displaces electric force from
    a primary role that is so evident in laboratory findings from
    atomic physics.

    5. Those physicists can see in certain remote galaxies certain effects
    which suggest the coming together of electrical matter, which, by
    all the basic rules of physics, should not occur because electric
    forces resist nucleation. Their evidence is a strong gravity
    force, abnormally related to the mass of the visible body acted
    upon or a substantial redshift in the optical spectrum of atoms
    radiating from the nearby field zone. Their assumption is that the
    universe was born in a Big Bang where everything was overheated
    and had such energy content that all physical barriers could be
    overcome. Anything is possible in such a vision!

    6. So, if excess heat is seen to emerge from a deuterated palladium
    cathode in a ‘cold fusion’ cell, that could suggest that
    ‘nucleation’ or fusion has occurred between deuterons, overcoming
    their mutual electric repulsion in that metal palladium.
    However, those who believe in ‘black holes’ are not inclined to
    believe in ‘cold fusion’ because they know that the ‘black-art’
    assumptions needed to create the ‘Big Bang’ and the ‘black hole’
    are not as easy to apply inside a lump of palladium on a
    laboratory test bench.

    7. This ‘disbelieving’ body of physicists has other ‘disbeliefs’ as
    well. They rely on their ‘practical’ knowledge of gravity and
    measurement of G to calculate ‘black hole’ properties, but do
    not believe that there is a real ‘aether’ medium, the distortion of
    which generates that gravitational action. They believe in
    mathematical extrapolation and that means reliance on equations
    do not ‘rupture’ when under excessive stress, as does real matter
    or real ‘aether’. What, indeed, is the tensile strength or the
    compressive strength or the shear strength of what physicists call
    4-space? What, one wonders, are its internal dimensions, the
    distances between its component parts? Without common ground on
    which to stand in talking about ‘aether’ or ‘space-time’, one
    cannot discuss with such physicists how it is that the ‘crystal’
    structure of the aether itself determines the ‘fine-structure
    constant’ they use in their atomic physics. One cannot discuss with
    such physicists how it is that the sub-quantum motion underlying the
    Planck action quantum interacts with matter present to force a
    need for a dynamic balance, which in turn demands the presence of
    a discrete and unseen graviton population. One cannot therefore
    get such physicists to listen to the formal account by which G,
    the constant of gravitation, is derived in terms of that dynamic
    balance. And so it follows that one cannot put the case to
    such physicists that, where matter is very concentrated, as within
    an atomic nucleus in the mid-range of the periodic table, the
    aether is not too far from the stress limits that govern graviton
    creation conforming with G as measured in our laboratory.

    8. It seems that there is no way in which one can lead a ‘disbelieving’
    physics community out of their wilderness, even if one uses their
    own language and words with which they are familiar. All one
    can do is to destroy their beliefs with the mighty blow forthcoming
    from the reality of a technological breakthrough. Whether this
    comes from ‘cold fusion’ or from ‘free energy’ sourced in
    ferromagnetism matters not, so long as there is that
    technological breakthrough. What is needed is something that
    points to evidence of how protons and deuterons are created
    preliminary to their fusion to form heavier forms of matter and
    how Planck’s action in the underlying aether spills out energy to
    feed that creation.

    9. The ‘black-hole’ supergravity that can occur in very dense
    matter cannot be explained until one can explain gravity in
    normal matter and until one can further explain the factors
    which determine the value of the fine-structure constant. If, for
    example, Planck’s constant were to change in a step function as
    a function of the mass density thresholds in very dense matter,
    related to the concentration of aether energy, so that would
    affect the interpretation of the so-called ‘black hole’ evidence.

    10. If, in striving to sustain a dynamic balance, the aether responds
    in a dual dynamic action to the passage of electromagnetic
    waves, so this could affect energy deployment implicit in
    Maxwell’s wave equations and it could explain why the aether
    medium appears non-dispersive. These possibilities are not even
    considered by physicists who insist on building only on their ‘past
    experience’ without looking at the foundations to see what might
    have been missed. So, we advance by the accident of discovery,
    and, it seems, ‘cold fusion’ is one such accident. It remains now
    to convince physicists generally that there is excess heat generated
    in a cold fusion cell and then they can begin to think of revising
    their theories. This they will do in their own way, mindless of the
    work of record that can help them in that task.

    11. Inasmuch as this author began in this field by first making the
    magnetic case for a real aether, then by determining the structure
    of that aether and deducing the fine-structure constant and going
    on from there to explain the connection with gravitation and
    proton creation, so it seems appropriate to lead from that into
    the subject of this Report. ‘Black holes’ and an ‘expanding
    universe’ conceived by physicists who were unaware of how Nature’s
    ongoing attempts at proton creation in space can progressively
    reduce the frequency of electromagnetic waves in transit, plus
    the illusions of Einstein, have made them deaf to what this author
    has been saying over the years. In spite of this the author will
    here try once again to introduce his theory of proton creation and
    with it the creation of neutrons and deuterons, all to give basis
    to the new physics essential to our understanding of what underlies
    ‘cold fusion’ and of that deeper source of ‘free energy’ from
    which protons and deuterons are created. The author will further
    show how gravity features in this act.

    12. One could not advance a theory on the scale provided by this
    author without encountering numerous obstructions where one has
    to pause to explain why others who claim something different have
    gone wrong in their own endeavours. The ‘cold fusion’ issue has
    run into such problems. However, here it has not been a question
    of theory. There is now too much theory and not enough fact and
    so it is that the author feels he can let his own theory pertaining
    to cold fusion stand the scrutiny of others in this contest before
    needing to consider its defence. No, the rival claims in the
    ‘cold fusion’ field are those of experimenters. Whilst there are
    the pioneers who persist experimentally in their onward research,
    there are the others who rely on their personal ‘experience’ of
    confirmatory tests which have failed. Thus, whilst the author
    makes no special claim for superior wisdom in this experimental
    field, he has a comment to offer on the latter topic. It is
    merely an observation that to get two like-polarity charges to
    come together in a metal conductor one needs a standing charge
    of opposite polarity set up in that metal. One way of creating
    this condition is by setting up a non-linear orthogonal
    configuration of the temperature gradient and the magnetic field
    in the metal. In attempting to use uniform temperature
    calorimeter test apparatus enclosing the cold fusion cell,
    researchers are choking off the possible catalyst temperature
    gradient that could well be needed to trigger deuteron fusion.
    That topic will be discussed in the Part II Report and, pending
    that, readers may see some mention of this in New Energy News:
    April 1994: ‘Patents for Cold Fusion’ pp. 3-5.

    ******

    It is hoped that the above discourse will explain why ‘cold
    fusion’ is seen by this author as offering more than a technological
    route to a non-polluting new source of energy. Nor is it merely
    something that can affect attitudes by nuclear physicists in their
    particular discipline. It is, in fact, a route to something of far
    greater consequence in that it gives us an insight into the true forces
    of Creation.

    It is appropriate here to remind the reader that ‘cold fusion’ is
    very much concerned with whether, and if so, how, hydrogen nuclei,
    adsorbed into a host metal and having their atomic electrons exposed to
    the interplay with free conduction electrons in that metal, can fuse
    together to release energy. The mutual transmutations and transient
    behaviour of the nuclei of the hydrogen isotopes, the proton, the
    deuteron and the triton, is what concerns us in finding the answer to
    these questions.

    The Triton Factor

    One is not far from claiming the ultimate scientific achievement
    when one declares an ability to calculate the proton mass
    theoretically in terms of electron mass, based on showing how Nature
    creates that proton.

    One should not then be surprised if the same theory explains other
    phenomena and leads to the precise derivation of other fundamental
    dimensionless physical constants, such as Planck’s constant and the
    gravitation constant G.

    Whilst the author has waited patiently for his work in this field to
    be appreciated and recognized by the world at large, to no avail so
    far, it has been personally satisfying to see how the same theory yields
    the solutions to lesser problems, such as those posed by the muon, the
    pion and the kaon or the neutron and the deuteron.

    The key interest on which this research was founded was that of
    understanding the electrodynamic properties of these particles and
    relating the quantum of action of a real aether with the
    electrodynamics of the gravitons which determine the force of
    gravitation.

    In the Appendices to this Report some of the relevant published
    papers are reproduced, so there is no point in discussing that work in the
    body of this text on ‘cold fusion’. However, not reproduced elsewhere
    is an account presented in a book entitled ‘GRAVITATION’ which the
    author published in 1975.

    The subject was that of showing how heavy electrons (the mu-mesons
    or muons) which account for the primary energy action in the aethereal
    vacuum medium come together to create particles from which evolve
    protons and gravitons. Their action in creating protons is fully
    disclosed in the paper reproduced in APPENDIX D. The paper in
    APPENDIX E deals with the neutron and the deuteron and particular
    reference is made to Table II in that paper which shows how a deuteron
    alternates between three states, one of which is electrically neutral
    with a transiently-free -particle, a state which makes it particularly
    vulnerable to fusion with another deuteron.

    Concerning gravitation, the author could further include ‘The
    Theory of the Gravitation Constant’, as published in Physics Essays, 2,
    pp. 173-179 (1989), but as that will be appended to ENERGY SCIENCE
    REPORT NO. 6, the reader is invited to refer to that. However, a
    summary introduction is presented below as APPENDIX A, reproduced
    from pp. 44 to 52 of the author’s 1975 book entitled ‘GRAVITATION’.
    It provides a pictorial scenario showing how particle building can
    occur to develop the proton into the graviton needed to explain the
    derivation of G, the constant of gravitation.

    From the viewpoint of ‘cold fusion’ this is relevant because one
    needs to be assured that a theory developed for the proton, the neutron
    and the deuteron is consistent with the physics needed to explain other
    phenomena and, as ‘black holes’ and gravitation have been mentioned, the
    link between protons and gravitation should be of interest. Knowledge
    of the graviton mass is essential if one is to calculate the value of
    G.

    The underlying theory was extremely simple in that the energy
    formula for two electric charges in contact is a quadratic equation
    having two solutions for the same energy value if one of the charge
    energies is a variable. This is because the energy of a charge e is
    inversely proportional to its bounding radius. Therefore, given two
    particle energy quanta, each nucleated by the standard unit of charge,
    one finds that a third particle form is created with no energy
    requirement. In an energy-active world, the separation and
    recombination of such particles and the ongoing regeneration of new
    particle forms amounts to a creation process. The question then arises
    as to which particle forms win in the contest for survival and it is
    found that only those having special secondary resonance properties can
    enjoy a long life span.

    In this contest for survival of particles, newly created by
    drawing on the pool of surplus energy, there are those which are created
    at very nearly the same mass by two different combination sequences.
    This gives them a dominant advantage but the only long term survivor
    in real matter is the proton.

    This means that the heavier particles of matter are formed by
    taking protons and/or antiprotons as basic building blocks and
    combining the -particle constituents, the electrons and positrons of
    the quantum-electrodynamic field background.

    The deuteron has to be an electron-positron-proton-antiproton
    composition of some kind, whereas the triton, the third isotope of
    hydrogen, can be of similar composition, but of more complex form.

    The reason for this is the fact that the basic graviton has a mass
    greater than twice the proton mass but not as great as three times the
    proton mass. Therefore, a closely-bound structure will constitute
    the deuteron, whereas the triton will need to have its mass seated in two
    end regions standing apart and not closely-bound by a -particle
    linkage.

    It was on these lines, that the theory of the deuteron and triton
    evolved, but the key to determining their actual structure was the
    evidence afforded by their precise mass and by their electrodynamic
    response properties as known from their magnetic moments. The same
    applied to the neutron, which, like the triton, had a third parameter to
    bring in as evidence, namely a measured lifetime.

    Such data, when deciphered, showed that the deuteron, for example,
    was exchanging states by particle and anti-particle annihilation and
    recreation and that in some states it had a satellite system or
    ‘entourage’ of ‘free’ -particles, meaning that they could migrate a
    very limited distance into a host metal containing such a deuteron.

    For the neutron the lifetime became calculable but, as the theory
    evolved to build into a model of nuclear chain linkage as atoms of
    greater atomic number formed, so the neutron could not be seen as part
    of the atomic nucleus. It only exists in a free condition where it has
    that limited lifetime.

    It is only very recently that the triton data has been deciphered
    and the theory has been proved very successful in interpreting the
    lifetime. Note that lifetimes are calculated on the basis of
    destructive bombardment by combinations of mu-mesons featuring in their
    quantum-electrodynamic dance in that aethereal field background.

    The work on the triton has followed closely on the discovery that
    the proton and deuteron have an abundance relationship that is set by
    their interaction in this aethereal background field, as deuterons are
    primed to undergo fission to recreate protons, whilst protons merge by
    fusion to create deuterons.

    The showing that the deuteron and the proton have a relative
    natural abundance that is determined by an ongoing physical process
    forms the subject of APPENDIX B, whereas the derivation of the
    lifetime of the triton is presented in APPENDIX C.

    It is noted that the author has written many other papers that
    connect with the above theory and five, in particular, warrant mention
    and are commended for library reference to interested readers as they
    will not be included in this initial series of the author’s ENERGY
    SCIENCE REPORTS.

    They are:

    (a) ‘Meson Production based on Thomson Energy Correlation’,

    Hadronic Journal, 9, 137-140 (1986).

    (b) ‘An Empricial Approach to Meson Energy Correlation’,

    Hadronic Journal, 9, 153-157 (1986).

    (c) ‘The Physics of the Missing Atoms: Technetium and Promethium’,

    Hadronic Journal, 10, 185-192 (1987).

    (d) ‘Conservative Hadron Interactions exemplified by the Creation of the Kaon’,

    Hadronic Journal, 12, 101-108 (1989).

    (e) ‘A Theory of Pion Creation’,

    Physics Essays, 2, 360-367 (1989).

    All these papers passed the test of referee scrutiny as did many
    papers giving groundwork for the above developments that were published
    in English by the Italian Institute of Physics in their Lettere Al Nuovo
    Cimento series in the five or so years before that periodical terminated
    publication at year-end 1985.

    There will be those who read this text who stand ready to criticize
    because there is so much in physics that can affect one’s views on
    particle behaviour. For example, the wave nature of the neutron is
    not something that may seem to fit easily into the picture presented
    above. However, in fact, it does, because that β-particle ‘entourage’
    already mentioned (see Table I in Appendix E) is what exhibits the wave
    property.

    The reader who is ready to discard the substance of this text on
    that account should first read the author’s paper ‘The Theoretical
    Nature of the Photon in a Lattice Vacuum’ to be found at pp. 345-359
    in ‘Quantum Uncertainties’ Series B: Physics Vol. 162 (1986) in the
    NATO ASI Series published by Plenum Publishing Corporation, New York.

    Then the reader may refer to the author’s paper: ‘A Causal
    Theory for Neutron Diffraction’, Physics Letters A, 119, pp. 105-108
    (1986), before looking up those many other background papers in Lettere
    Al Nuovo Cimento.

    Indeed, for the reader who has a cosmological inclination, half
    an eye on the ‘missing mass’ problem, and believes that steady-state
    equilibrium by proton creation and decay is not compatible with the
    redshift indication of an expanding universe, the author’s paper that
    warrants special scrutiny is:

    ‘The Steady-State Free-Electron Population of Free Space’
    Lettere Al Nuovo Cimento, 41, pp. 252-256 (1984).

    Conclusion

    This Energy Science Report on Cold Fusion, in its Part I
    contribution to the ‘Power from Water’ theme, is intended to present
    some of the author’s relevant background theory in the scientific paper
    form in which it has already been published elsewhere, though the paper on
    the proton-deuteron abundance ratio is new to this work.

    As already stated, Part II will address other aspects bearing
    more directly on the technology of cold fusion, but this Part I
    material is an essential introduction to show why it is that the deuteron
    by its particle entourage can be partially embroiled in the electron-positron activity of free electrons in a metal host conductor. As
    already mentioned, one can see from the reference in APPENDIX E the
    situation where the core of the deuteron sits electrically neutral and
    bare of charge for moments in a fluctuating environment of charge,
    meaning that it is vulnerable to Coulomb barrier penetration by
    charged deuterons, so giving chance of fusion.

    Also, it is hoped that what has been said will cause some physicists
    to realise that existing knowledge of fundamental physics has its
    limitations but that ‘cold fusion’ research could well give us the
    added stimulus leading to the needed insight into the forces at work in
    creating the hydrogen nucleus and so understanding Creation on its
    universal scale.

    The reprinted papers forming APPENDIX D and APPENDIX E, are
    copied with the kind permission from the Editors of the Hadronic Journal.

    26th April 1994

    DR. HAROLD ASPDEN

    ENERGY SCIENCE LIMITED

    c/o SABBERTON PUBLICATIONS

    P.O. BOX 35, SOUTHAMPTON, SO16 7RB,

    ENGLAND

    APPENDIX A

    [The text here in the printed version of this Energy Science Report No. 5 was copied from pages 44-51 of the author’s 1975 book ‘GRAVITATION’]

    These pages can be seen in pdf format by using the following link:

    *******************

    APPENDIX B

    THE DEUTERON-PROTON RELATIVE ABUNDANCE


    Introduction

    We begin by asking a question:

    ‘Bearing in mind that the chemistry, meaning the chemical-bonding
    affinity, of heavy water is identical to that of ordinary water, would
    a human being be: (a) more healthy, (b) less healthy or (c) as healthy if the
    water intake to the body were to be heavy water rather than
    ordinary water?’

    As we approach the 21st century our scientific knowledge should
    have an answer to this question, especially as we know physicists are
    trying to solve our energy problems by nuclear fusion processes which
    utilize heavy water.

    Putting the above question rather differently:

    ‘If a wealthy man were to create an environment in which he spent
    most of his time with no exposure to heavy water, meaning that all
    deuterium oxide or hydrogen deuteroxide is removed from the ordinary
    water supplied to that environment, could he expect to benefit
    healthwise and live longer?’

    Perhaps, unknown to this author, the answer to these questions is to
    be found somewhere on university library shelves. The author, in giving
    limited consideration to this question, referred to a textbook in his own
    possession, the third edition (1957) of ‘Physical Chemistry’ by Walter J.
    Moore, Professor of Chemistry at Indiana, published in the original
    American edition by Prentice-Hall Inc. of New York.

    An end-of-chapter question on page 249 reads:

    ‘A normal male subject weighing 70.8 kg was injected with
    5.09 ml of water containing tritium (9×109 cpm).
    Equilibrium with body water was reached after 3 hr when a 1-ml sample of plasma water from the subject had an
    activity of 1.8×105 cpm. Estimate the weight per cent of
    water in the human body.’

    The triton is the atomic nucleus of tritium, the third isotope of
    the element hydrogen, so, in a sense, one can infer from the latter
    exercise question that the body intake of very heavy water involving
    tritium will make the body radioactive and that cannot be good for
    one’s health. Yet the very fact that this exercise question appears in
    a university textbook does suggest that water containing a
    concentration of tritium can be used in clinical testing. Our interest
    in the health implications of deuterium is therefore warranted.
    Deuterium is not radioactive but we still have a valid and unanswered
    question in wondering if heavy water is in any way damaging to health.

    Deuteron Fission and Fusion as a Natural Phenomenon

    In that same ‘Physical Chemistry’ textbook and chapter 9, with its
    appended questions, we read on p. 244 about the ‘energy production of
    stars’. Two nuclear processes are described as being alternative
    possibilities. One involves a process in which C12 and H1 fuse to produce
    N13 which in turn decays to C13 with the emission of a positron before
    experiencing further regenerative fusion and decay iterations with
    hydrogen and oxygen to yield ultimately He4. The other involves the
    fusion of two protons to produce a deuteron and a positron, also
    followed by the synthesis of He4.

    It is said that the first of these, the carbon cycle, is the source
    of energy in very hot stars, whilst the second involving deuterons
    applies to somewhat cooler stars like our sun. Amongst the steps in the
    stellar carbon cycle there is one in which C13 combines with H1 to yield
    N14 before the latter combines with H1 to produce O15.

    Now, moving back to Earth and those end-of-chapter questions we
    read:

    ‘According to W. F. Libby [Science, 109, 227 (1949)] it is
    probable that radioactive carbon-14 (mean lifetime 5720
    years) is produced in the upper atmosphere by the action of
    cosmic-ray neutrons on N14, being thereby maintained at
    approximately constant concentration of 12.5 cpm per g
    of carbon. A sample of wood from an ancient Egyptian
    tomb gave an activity of 7.04 cpm per g of carbon.
    Estimate the age of the wood.’

    The significance of this is that the physics of carbon-14 dating
    depends upon the transmutation of atomic nuclei and the probability of
    events involving exposure the atomic nuclei to bombardment by energetic
    stimuli. Now, in simply assigning a mean lifetime to a particular
    nuclear decay process the physicist can be hiding ignorance of something
    behind his presentation of empirical fact. He knows that there is decay
    and can measure the mean lifetime involved, but we are not in every
    case told why that decay occurs. Yes, we are told that the cosmic-ray
    neutrons create C14 from N14, presumably by emission of a positron, but
    we are not told what it is that sporadically bombards the C14 once it
    is protected from exposure to the elements and which somehow triggers its
    eventual decay.

    There is, quite clearly, something in our non-cosmic Earth
    environment that can activate nuclear fission and possibly nuclear
    fusion reactions. This may be that mysterious something we call the
    ‘neutrino’ but one really must wonder whether that term ‘neutrino’ is
    scientific ‘mumbo jumbo’ for what could be described as ‘a sporadic
    intruding influence of an energetic interaction with an all-pervading
    field background’. The advancement of energy science may depend upon
    the development of a better understanding of that intruding influence,
    because it surely must account for nuclear energy transactions which
    can occur at normal temperatures as in that ancient piece of wood of
    the carbon-dating example.

    It is not very satisfying to be told that, inasmuch as energy and
    momentum equations would not otherwise balance, there is need to
    recognize the existence of particles we call ‘neutrinos’ or the even
    enigmatic ‘neutrons’. There was in pre-20th century science the firm
    belief in the existence of an aether medium which common sense suggested
    as that ever-present hidden underworld which could sustain electric field
    oscillations travelling through a vacuum. In a sense, the modern
    physicist has replaced that aether with a collection of imaginary
    particles, whether termed ‘neutrinos’ or described as being ‘virtual’
    which are the unseen denizens of the vacuum state which we can refer to
    to ‘take up the slack’ created by gaps in our scientific knowledge.
    Yet, is the conventional picture of that virtual ‘neutrino-inhabited’
    quantum sea correct?

    Let us return to our problem and focus attention upon the
    transmutation of the hydrogen and deuterium nuclei, meaning the process
    deemed to occur in the sun by which two protons fuse with release of
    energy and a positron (or so-called beta-plus particle) to become a
    deuteron. Also meant is the reverse process by which, given the right
    stimulus, a deuteron can convert into two protons by emitting an
    electron or so-called beta-minus particle. The latter possibility as
    a natural process is suggested by the observation that the abundance
    ratio of deuterons to protons is the same for matter found in comets
    as it is for matter on Earth
    .

    What universal process determines this ratio and keeps it constant?
    Are we, instead, to believe that the ratio is one which evolves and so
    changes, in which case we should try to explain why the comet presents the
    same ratio as the Earth. Are we to believe that there was a Big Bang
    in which the ratio of protons to deuterons was fixed in an atomic soup
    which was stirred to a uniform and final mixture before the cometary
    matter and the Earth condensed from that common nebulous mixture?

    In the absence of verifying laboratory tests we shall never know
    for certain the answer to these questions, but one can say that there is
    more than the glimmer of a solution if the abundance ratio actually
    measured can be deduced in the manner and style of the solution of those
    end-of-chapter questions in that textbook entitled ‘Physical Chemistry’.

    So, we now set our sights on explaining the proton/deuteron
    abundance ratio ducumented at page 9-65 of the 1967 second edition of
    the McGraw-Hill ‘Handbook of Physics’ edited by Condon and Odishaw.
    According to this reference work, in every ten million atoms containing
    hydrogen and deuterium there are 9,998,508 nucleated by protons and
    1,492 nucleated by deuterons.

    The conditions governing the fusion and fission of these atomic
    particles must involve the element of chance, in that a combination of
    events conducive to decay must occur as a probability function,
    bringing about actions involving energy in a form which can materialize
    or dematerialize in integer quanta we associate with decay particle
    products (those beta particles).

    Note that we speak of ‘decay’ both for the fission and the fusion
    process as if decay can be a two-way or reversible operation. This has
    meaning only if the real form of the proton and the deuteron is that of
    a system which overall exhibits the stability of single-form existence
    but yet which, inherently, undergoes cyclic alternations of state, as
    between a ground state and one of greater energy.

    Much more will be said about this subject in this and later work
    and by reference to the author’s published papers, but the reader may here
    consider two basic facts known to the particle physicist. These are (a)
    that the deuteron exhibits a nuclear magnetic moment that is about
    6/7ths of that expected in relation to its spin property and (b) that
    the proton exhibits properties suggesting it is composed of three charges,
    rather than a single charge. (See APPENDIX E and the Feynman
    reference in APPENDIX A).

    The deuteron property implies that it has a state for one seventh
    of the time in which its positive charge becomes that of a satellite
    beta-plus particle that has been transiently displaced from the mass of
    its core, which thereby reacts as a neutral charge in its spin response
    during that limited transient period.

    The proton property suggests a ‘quark’ composition which this
    author prefers to see as being that of a proton charge +e aggregated
    with a (+e, -e) charge pair in the form of a beta-plus and a beta-minus
    particle or, in the alternative, an antiproton charge (-e) aggregated
    with two beta-plus (+e) particles.

    For the actual proton this implies alternation between two
    states in one of which the mass-energy is slightly greater than the norm
    of that of a bare proton charge standing in isolation and in the other
    of which the mass-energy is slightly lower than that norm.

    For the deuteron, there are three alternative states, (a) one of
    lowest energy, the ground state, in which two antiproton charges are
    aggregated with three beta-plus particles, (b) the neutral state, of
    greatest ‘core’ energy, where a (+e, -e) beta-particle charge pair sits
    between an antiproton charge and a proton charge in the near presence
    of a satellite (+e) beta-particle, and (c) the third energy state for
    which two proton charges are aggregated with an intermediate beta-minus
    particle with the (+e, -e) beta-particle charge pair in a satellite
    position.

    These particle ‘models’ are justified on other grounds in
    APPENDIX E, but they serve here to give basis for our understanding
    that a system of protons in a suitable combination of states can serve
    collectively to permit a balanced energy transition involving the
    creation of the deuteron in its least energy state. Similarly, it is the
    transiently neutral state of the deuterons which permits their reaction
    in an energy balanced transition which regenerates the proton.

    To formulate the resulting abundance ratio of H1: H2 we write:

    H1/H2 = (S1N/(S2n)(P1/P2) ………………. (1)

    In the above equation:

    S1 is the factor signifying the incidence of state when a transition
    can occur involving the proton (this having the value 2 because there
    are two equally probable states).

    S2 is the factor signifying the incidence of state when a transition
    can occur involving the deuteron (this having the value 7 because the
    deuteron is in its vulnerable neutral core state for one seventh of the
    time).

    N is the number of protons that need to be subjected
    simultaneously to the transition stimulus of the energy fluctuations
    in the environmental field background in order to secure the energy
    balance conditions needed to assure a transmutation.

    n is the number of deuterons that need to be subjected
    simultaneously to the transition stimulus of the energy fluctuations
    in the environmental field background in order to secure the energy
    balance conditions needed to assure a transmutation.

    P1 is the net number of protons created by collective action in
    a transition event.

    P2 is the net number of deuterons created by collective action in
    a transition event.

    The evaluation of the four parameters N, n, P1 and P2 will be
    our task below, but, to show the power of the argument being pursued,
    it is of interest to recite the calculated values first. They are:

    N = 35 n = 8 P1 = 18 and P2 = 16

    Putting these in equation (1) gives the result:

    H1: H2 = 6705 : 1

    which corresponds to a deuteron abundance factor of 1491 parts per ten
    million compared with the observed factor of 1492.

    This result is, at least in this author’s opinion, a very
    significant scientific contribution.

    It means that the physical processes that can occur in the oceans
    of the Earth can establish this equilibrium ratio as between the abundance
    of protons and deuterons to cause the heavy water content of the sea
    to be a natural physical quantity maintained at a constant value.
    One needs, of course, to apply the underlying theory to estimate the
    time constant of the exchanges leading to equilibrium. This is measured
    in thousands of years so that one can feel confident that a laboratory
    store of deuterium hydroxide or heavy water will not convert into
    normal water too readily.

    More important, however, there are implications for the Big Bang
    theory of cosmic evolution and for energy generation by so-called
    ‘cold-fusion’ methods, if deuterons and protons can undergo mutual
    transmutation at the temperature of our environment. The abundance
    ratio could not be computed by theory in the way suggested unless such
    transmutations do occur and, it may be noted, none of those high energy
    neutrons which are deemed so important in high energy physics are involved
    in the processes discussed.

    The Significance of the Deuteron Algorithm

    The reason for terming the formulation in equation (1) as an
    ‘algorithm’ is the author’s way of saying that what has been discovered
    is the short-cut route for solving a problem which, by orthodox
    methods, would otherwise involve vast amounts of computer time. That
    is assuming that the route to a solution by computer methods has been
    devised and, as concerns the proton/deuteron abundance ratio, scientists
    seem not, as yet, to have appreciated that the problem is amenable to
    solution.

    It is traditional in particle physics which involve hadronic mass
    calculations for problems to be approached by iterative techniques
    which take account of a vast number of interacting factors. This will
    be better understood when we come to discuss what it is that determines
    the proton/electron mass ratio. The algorithm we will use for solving
    that problem is the ‘jewel in the crown’ amongst the arsenal already
    mentioned. It has devastating implications for orthodox scientific
    doctrine founded on so-called ‘quantum chromodynamics’.

    However, as the scientific world knows from the hostility and
    resentment aroused against the claims of Professors Fleischmann and
    Pons for daring to imply that deuteron cold fusion had been discovered,
    there is readiness to scorn progress in science that challenges cherished
    beliefs.

    Where the only product is an intellectual accomplishment
    expressed as an equation which presents the numerical value of what is
    a very fundamental dimensionless physical constant, then the wrath of
    the establishment scientist can reach its zenith. The modern computer
    allows one, by trial an error, to probe all combinations of numbers,
    if one is willing to indulge in exercises that are arithmetic in character
    rather than physically founded. It follows, by the doctrine that if
    something is possible it will eventually happen, that scientists assume
    the trial and error arithmetic exercise is at the root of any claim to
    have deduced a physical constant.

    They lack credulity and show no tolerance when one makes a claim
    to explain the numerical value of a physical constant. What, they
    ask, is the merit of deducing the value of a quantity having a
    particular meaning in physics when the value of that quantity is already
    known to high precision from our experimental measurements? They argue,
    therefore, that to find acceptance one must, before it is measured,
    predict a numerical value of a constant having real physical meaning,
    so that eventual measurement of that quantity will deservedly command
    attention.

    This is not a logical posture, given that there are a limited
    number of truly dimensionless fundamental constants in physics, all of
    which have been already measured to very high precision. It is not a
    logical posture because it means that we are denied the hope of ever
    allowing a simple algorithmic approach to confirm to us the discovery
    of insight into the factors which govern the constant of gravitation,
    Planck’s constant and that proton/electron mass ratio already
    mentioned. It is, however, deemed acceptable to allow the
    supercomputer to try to decipher the mysteries of Nature whilst feeding
    it with mathematically elegant instructions designed to test artistic
    notions of symmetry.

    That said, the author challenges the reader to examine equation (1)
    and consider the skill needed to contrive its discovery and the choice of
    parameters had the author really probed the problem by exercising a
    computer.

    Firstly, consider the simplicity of the equation and its symmetry
    as between the proton-deuteron transition of the numerator and the
    deuteron-proton transition of the denominator. Then consider the
    chances, with an arbitrary choice of integer numbers for S1, S2, N, n,
    P1 and P2 of finding the correct solution and, after choosing an
    appropriate combination of integers, consider the scope for devising a
    plausible physical model giving meaning to the integers selected. Note
    that the author could have put the integer 9 for P1 and the integer 8
    for P2, if the basis of the formulation had not developed from direct
    physical analysis.

    One may wonder what solution the trial and error computer search
    would have found had the objective been set for this general equation
    to give the right answer to within the one in thousand precision assuming
    any integer combination other than that presented above is to be regarded
    as valid. There are in fact many possibilities but then one confronts
    that formidable task of justifying in physical terms which combination
    applies and how each of the numbers chosen has a valid role in
    determining the proton-deuteron abundance ratio. In the absence of a
    tentative model to guide one’s endeavours that is not a worthwhile
    pursuit.

    Physicists are not loath to wasting time on such a project,
    judging by the attack on the theoretical value of the dimensionless
    quantity incorporating Planck’s constant. This is a reference to α-1,
    the constant we know as having the value 137.035 9895(61). In 1970 a
    physicist named Wyler claimed a derivation for this constant as 137.036
    082 by presenting a formula including the quantity π and the integer
    numbers 1, 2, 4, 5, 8 and 9. As is explained by Petley in his 1985 book
    ‘The Fundamental Physical Constants and the Frontiers of Measurement’,
    it was in 1971 that Robertson, Roskies and Prosen brought disrepute to
    such work by arbitrarily sythesizing values of α-1 with the aid of a
    computer. Using a similar format to Wyler’s equation, given some
    ground rules and arbitrary combination and choice of 11 integer numbers
    and further including , the computer found 6 values of α-1 all
    closer to the measured value than was Wyler’s value. The integers
    ranged up to 19 in value and one can but deplore this ‘numbers game’
    exercise, as a means for suppressing genuine physically based endeavour
    by those who seek to solve the great mysteries of physical science. The
    fine structure constant α concerns the action we associate with
    Planck’s constant. It is at the very heart of the Energy Science to
    be discussed in these Reports.

    It is with that background in mind that the author invites the
    reader to examine the algorithm presented in equation (1) and consider the
    problem of devising a physically meaningful result in such good accord
    with the measured value, if that accord were fortuitous.

    However, for the benefit of the reader who seeks the truths of this
    situation, we will first summarize the process involved and then begin
    the analysis of the energy transactions which govern equation (1).

    How is it that protons can transmute into deuterons and vice
    versa as an ongoing natural process, when the mass-energy of two
    protons exceeds that of the deuteron?

    The reason is that, owing to vacuum energy fluctuations, both
    the proton and the deuteron are constantly experiencing changes of
    state in which they have slightly changed mass-energy.

    It so happens that the highest energy state of the deuteron which
    applies for one seventh of the time is one for which the energy is higher
    than twice the lowest energy state of the proton. The proton ground
    state applies for what is virtually half of any period of time. The
    other half is spent in its higher energy state and it flips cyclically
    between the two states halting very momentarily between these two states
    whilst in its ‘bare proton’ form. The presence of beta particles when
    in either of the two principal energy states account for the mass
    differences.

    Accordingly, the deuteron to proton transformation occurs when
    the deuteron is in its highest energy condition. Conversely, the protons
    cooperate in creating a deuteron by action focused on the deuteron
    ground state.

    The analysis by which these actions can be fully understood does,
    therefore, require the background study of the state composition of the
    proton and the deuteron.

    For the purpose of this Report, it suffices here to refer to
    APPENDIX A in which the author discusses the three-part proton and
    APPENDIX E, which concerns the deuteron.

    As to the proton, the ‘bare proton’ has a definite mass that is
    1836.152 times the electron mass, as calculated in APPENDIX D, but,
    by reference to Feynman in APPENDIX A, we saw that the proton in its
    normal state behaves as if its charge is spread between three centres.
    In fact it is alternating between states, being at times a bare proton
    charge and at other times having close association with an electron-positron pair and even in another state becoming an antiproton coupled
    to two positrons – or rather beta-particles, because physicists prefer
    not to think of electrons and positrons as being nuclear constituents.

    In its beta-particle association it has a mass increased in one
    state by a value very close to 0.25 electron mass units and decreased
    in the other least-energy ground state by very nearly 0.25 electron mass
    units. For the purpose of the calculations of the deuteron-proton
    transmutations the time spent in the intermediate ‘bare proton’ state,
    in order to keep the overall mass-energy balance at a mean value
    equal to that of the ‘bare proton’, is quite negligible.

    The reader is here reminded that particle physicists picture the
    proton as comprising quarks as if it has three separate fractionally
    charged components. This author urges the reader to think in terms of a
    proton which changes form between three states in each of which its
    component charges are unitary at all times. This author is urging the
    reader to keep in mind that charges can be created and annihilated in
    pairs and that this is a property of the beta-particles known from
    quantum electro-dynamics. It needs little imagination to recognize
    that such charge transmutations occur inside protons and deuterons and
    that there could even be some polarity inversion or exchanges involved
    between beta-particles and protons when they are so closely bound
    together in atomic nuclei.

    Physicists who believe in fractionally charged quarks are leaping
    into the unknown and making unwarranted assumptions. All the evidence
    points instead to transmuting forms of unitary charge particles which
    only appear on a statistical average to be fractionally charged.
    They are, in fact, exchanging energy with nearby charges and
    participating in vacuum field effects of pair creation and annihilation
    activity. Therefore, they exhibit behaviour reflecting their average
    condition. Of course, when they emerge from their bondage in the
    composite particle form they must appear as unitary charges, which
    explains why the so-called quark has never been isolated in any
    experiments.

    Just as the physicist assumes that there is a neutron in the
    deuteron, so he has assumed that there are quarks in the proton. That is
    an ill-founded assumption which can be remedied by understanding what is
    offered in this text as an explanation for proton-deuteron quantities
    which can be measured against the theory.

    We now delve into the detailed analysis leading to the prime
    formula specifying the natural proton:deuteron abundance ratio.

    Energy-Balance Criteria

    It will be argued that, for the simple particle structures
    involved in the deuteron and proton states, we can assume for close
    approximation purposes, that energy transactions between these two
    particle forms involve quantities corresponding to quarter units of
    the rest mass-energy of the electron.

    Should the reader question this it may help to refer to another
    older textbook, this being ‘Modern Physics’ by Professor H. A. Wilson of
    the Rice Institute at Houston, Texas, reprinted in 1946 from the 1937
    second edition (publishers Blackie & Son Limited, London).

    It is at p. 261 in the chapter on Atomic Nuclei that Wilson begins
    to discuss the fact that the energy released in nuclear reactions,
    particularly those involving the lighter atoms, is nearly always in
    approximate multiples of 0.387 MeV. This is 0.757 units of electron
    rest-mass energy, but, for reasons that will later become apparent,
    we will assume that this corresponds to three of the quarter units just
    mentioned.
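
    A quick arithmetic check of this reading, as a minimal sketch in Python
    (the only value assumed beyond Wilson’s figure is the 0.511 MeV electron
    rest-mass energy):

        # Wilson's recurring nuclear energy quantum in electron rest-mass units
        print(0.387 / 0.511)   # 0.757...
        print(3 * 0.25)        # 0.75, i.e. three of the quarter-units assumed here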

    It seems quite logical, therefore, to look to the electron or the
    positron (that is, the beta particles associated with nuclear decay) as
    providing the ‘glue’ or binding energy holding the heavy charges (the
    hadrons) together in an atomic nucleus.

    The deuteron when bombarded by very high energy from a radioactive
    gamma ray source breaks up by emitting two heavy particles, one being
    the proton. The other heavy particle is a neutral entity which we call
    the ‘neutron’. The neutron is unstable and has a mean lifetime of
    about 15 minutes, breaking up into a proton and an electron. It
    follows from this that one could say that a deuteron comprises two
    protons and an electron. Remembering then that the proton is deemed to
    comprise three charged components it is not unreasonable to believe
    that, when it stands in isolation, it comprises a heavy positive charge
    in close association with an electron-positron pair or a heavy negative
    particle closely bound between two positrons. In this scenario the
    ‘neutron’ can be a neutral aggregation of an electron and one of
    these proton forms.

    We come therefore back to the rather simple proposition that
    electrons and positrons exist in atomic nuclei and account for the
    binding energy which holds the protons and antiprotons together. There
    are no neutrons, as such, in atomic nuclei.

    Now, based on Table II in APPENDIX E, it can be seen that we
    can state the highest and lowest energy states of the deuteron in terms
    of their ‘proton’ P unit composition and ‘electron mass units’ E. The
    latter are units of 2e²/3a, so that state A has energy 2P+3E-35E/8
    because the deuteron, as such, incorporates three β-particles. State
    C has energy 2P+E-18E/8, there being only one β-particle in the deuteron
    core. The intermediate state B deuteron has an energy 2P+2E-25E/8,
    which is the highest. In contrast the proton has a least energy of
    P+2E-9E/4 and a highest energy of P+2E-7E/4.
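
    These state energies can be tabulated exactly. The following is a
    minimal sketch in Python, using exact fractions, with each energy
    quoted net of its 2P or P nucleon content (a bookkeeping convention
    adopted here, not taken from the Report):

        from fractions import Fraction as F

        # deuteron states, net of 2P: (beta count)E plus interaction energy
        deuteron = {"A": 3 - F(35, 8),     # -11/8 E, the ground state
                    "B": 2 - F(25, 8),     # -9/8 E, the highest state
                    "C": 1 - F(18, 8)}     # -10/8 E
        # proton states, net of P
        proton = {"lowest": 2 - F(9, 4),   # -1/4 E
                  "highest": 2 - F(7, 4)}  # +1/4 E
        print(deuteron, proton)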

    Consider now the action needed to produce ground-state deuterons
    from protons which have net energies of -E/4 or +E/4. The action we
    contemplate will involve no net energy exchange in the transmutation
    process, but may involve fluctuations of energy. Also, we will
    presume the decay of protons is conditioned at an energy level matching
    that at which protons are created, this is in their bare charge form with
    no satellite electrons or positrons. The proton input to the deuteron
    creation process must then involve an even number of protons involving
    equal participation of those with net +E/4 energies and -E/4 energies
    (meaning -7E/4 and -9E/4 interaction energies, respectively).

    Our deuteron creation reaction will involve N three-charge
    protons creating x ground-state A deuterons plus y bare electrons or
    positrons and z residual protons in their bare charge state.

    The rules governing a decay process of this kind are that the space
    occupancy by electron and positron charge and so their intrinsic energy
    content must be conserved, as must interaction energy separately and
    the numbers of bare protons or antiprotons. Noting that the deuteron
    ground-state interaction energy is given by -35E/8 and that its
    electron/positron content is 3, one can then write:

    space conservation:   2N = 3x + y ............................... (2)
    energy conservation:  (35E/8)x = (7E/4 + 9E/4)(N/2) ............. (3)
    proton conservation:  N = 2x + z ................................ (4)

    It requires simple algebra to find the solution for minimal
    residue, meaning z is minimum with N finite. It may be verified that the
    following combination of numerical values satisfies the three
    equations:

     x    y    z    N

    16   22    3   35
    32   44    6   70
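
    The minimal solution can be confirmed by a short brute-force search, a
    sketch in Python assuming only the three rules quoted above:

        # find non-negative integer solutions of rules (2)-(4), smallest N first
        for N in range(1, 101):
            if (16 * N) % 35:        # rule (3) reduces to x = 16N/35
                continue
            x = 16 * N // 35
            y = 2 * N - 3 * x        # rule (2): space occupancy
            z = N - 2 * x            # rule (4): proton count
            if y >= 0 and z >= 0:
                print(N, x, y, z)    # -> 35 16 22 3, then 70 32 44 6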

    From this one finds the unique solution, which is that a trigger
    event involving 35 protons produces 16 ground-state deuterons. The
    protons can be in either of two states and at their transition through
    the bare state some will be tending to increase energy and others will
    be tending to decrease energy. This trigger event occurs when all 35 are
    in the same transient increasing energy state, meaning an event
    probability factor, the inverse of which is proportional to the numbers
    of protons in the equilibrium system. The latter factor is 2³⁵.

    The reverse process involves a vacuum field fluctuation
    supplying 0.511 MeV of energy as part of the trigger event by which
    deuterons in their transient highest energy B state are raised to the energy
    level at which they can transform into proton pairs. There is a
    governing requirement for other transient energy input in paired units of
    the electron rest-mass energy quantum E = 0.511 MeV and a need for
    charge parity by a transformation of the C state deuteron form into
    the ground-state A form.

    Note that a neutral B-state deuteron core without its satellite
    beta-plus particle has a net energy of 2E-25E/8 or -9E/8. Therefore
    the addition to a group of 8 such deuterons of the mass energy carried
    by 9 beta-plus particles will correspond to an event which brings the
    energy into balance with that of 16 protons, suggesting that this could
    be the process by which deuterons transmute into protons.

    The ongoing energy fluctuations in the electron-positron field
    will allow the energy of those 8 satellite beta-plus particles to
    redeploy into electron-positron pairs in the quantum-electrodynamic
    background which sources the 9 beta particles, as the positive charge
    transfers to the proton product. On balance only one 0.511 MeV unit
    E of field energy is needed to stimulate the deuteron-proton conversion.

    The action described can, therefore, on energy balance criteria,
    create 16 protons from those 8 deuterons, but only if there is a net
    unit electron rest-mass energy input and a complementary reaction which
    can take up the surplus unit of positive charge.

    Since the net core deficit energy of the C state deuteron is E less
    9E/4 or -10E/8 and that of the A state deuteron is 3E less 35E/8, which
    is -11E/8, the transition of 11 C state deuterons to 10 A state
    deuterons with the shedding of two protons will occur with no energy
    residue. However, in this case the reaction product requires an input of
    one unit of positive charge.
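
    Both balances just described can be verified directly; a minimal sketch,
    using the net state energies tabulated earlier:

        from fractions import Fraction as F

        B_core = 2 - F(25, 8)     # neutral B-state core, net -9/8 E
        print(8 * B_core + 9)     # 0: 8 cores plus 9 beta-plus = 16 bare protons

        C = 1 - F(18, 8)          # C-state deuteron, net -10/8 E
        A = 3 - F(35, 8)          # A-state (ground), net -11/8 E
        print(11 * C - 10 * A)    # 0: 11 C -> 10 A plus 2 protons, no residue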

    It follows that, at least in theory, the state transitions of the
    deuteron could, in the normal ongoing QED field activity, give reason
    for expecting protons to emerge from natural fission of deuterons but
    the statistics of such an event are set by the chance combination of 8
    of the B-deuteron states. Then 16 protons will emerge directly from
    those B-state deuterons and 2 protons will emerge from the very
    frequent C-state to A-state transitions. The event will mean that
    protons are created in batches of 18 from these events.

    Each deuteron is in the B-state for 1/7th of any period of time.
    This yields an event factor giving a measure of the deuteron population
    as 7⁸, since 8 deuterons collectively are the target for the primary
    reaction.

    Combining these results one finds that S1 and N in equation (1) are
    2 and 35, respectively. Furthermore, P1 in equation (1) is 18.
    Similarly, S2 is 7 and n is 8 in equation (1) with P2 as 16.

    The overall ratio of proton to deuteron in the equilibrium state
    can then be expressed by the contracted quantity 9(16/7)⁸, which is 6705
    as a proton to deuteron ratio or 1491 deuterons per ten million
    protons.

    As already stated above, this compares with an experimental
    abundance ratio assessment of 1492 per ten million.
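
    The closing arithmetic is easily reproduced (a sketch):

        ratio = 9 * (16 / 7) ** 8    # the contracted form of equation (1)
        print(ratio)                 # 6705.3 protons per deuteron
        print(1e7 / ratio)           # 1491.4 deuterons per ten million protons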

    The General Parity Criteria

    It is important to appreciate, when dealing with problems
    involving the background zero-point energy field, that the energy
    balance is the primary regulating factor. There can be energy
    fluctuations but, so far as the energy locked into the matter form is
    concerned, this is conserved in the overall picture of things.

    Charge parity and the parity of space occupancy associated with
    electron-positron charge forms are less important to individual energy
    processes of the kind just described, though these too must be balanced
    on a collective less-local basis.

    For example, an electron and positron can, together, be seen as
    a neutral charge entity and yet two space quanta are involved.
    Conversely, two space quanta can be occupied by charge of the same
    polarity, meaning that a given even number of space quanta can all
    be occupied, and then there can be a net charge out-of-balance.

    If one says that 35 normal three-charge protons can transform
    into 16 deuterons plus 3 bare single-charge protons, there is a net
    charge deficit of 16 units of charge e. However, we are also saying
    that the reverse event can occur in which batches of 8 plus 1 deuterons
    convert into 18 protons. Both batch processes are occurring together
    in the deuteron/proton environment and so, allowing for transient
    leptonic (electron-positron) activity in the QED field background (see
    section III of APPENDIX E) the charge condition balances overall.
    Similarly the space occupancy condition is a self-balancing process
    in our stable local field environment.

    Should one ask whether a litre of heavy water will be
    transmogrified into normal water by the processes suggested above one
    must answer affirmatively. The real question is that of knowing the
    time scale involved.

    Here one can estimate the time rate of these events by noting that
    an event time factor of the order of 10⁻¹³ seconds characterizes the
    single electron transition in the quantum field background. It can
    decay at A and be recreated at B as if it jumps from A to B in that
    period.

    The three-charge proton and state B deuteron decays discussed
    above centre on a pairing of two electron-sized charges in each of these
    particle forms. The governing frequency of the background field is
    that corresponding to a photon of energy equal to the rest mass energy
    of the electron. The chance factor for a single electron as target
    for an energy fluctuation is about 1 in 10⁷, meaning that there are that
    many cycles of that electron Compton frequency in the 10⁻¹³ second
    period of the electron lifetime.

    Therefore, we can estimate that every 10⁻⁶ seconds every proton and
    B state deuteron will be a candidate for transmutation. For there to
    be transmutation, however, the target particles have all to be in the
    same state and this is governed, for the proton, by that factor above
    of 2³⁵. This means that, on average, a proton will withstand
    participation in the deuteron creation process for a period of 2³⁵
    microseconds, which is about 10 hours.
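
    The timescale arithmetic runs as follows; a sketch, taking the standard
    electron Compton frequency of 1.235×10²⁰ cycles per second:

        compton = 1.235e20                # electron Compton frequency, per second
        print(compton * 1e-13)            # ~1.2e7 cycles per electron lifetime
        hours = (2 ** 35) * 1e-6 / 3600   # 2^35 microseconds, expressed in hours
        print(hours)                      # ~9.5, i.e. about 10 hours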

    This period reduces to a few seconds for the converse process by
    which deuterons should transmute into protons in water that is nearly
    100% deuterium oxide. It is only in the composition of the equilibrium
    mixture that the proton transmutation time rate applies for the
    reciprocal transmutations. Clearly, therefore, the question arises as
    to why heavy water does not convert into normal water on a time scale
    measured in minutes.

    The answer to this is connected with the problem confronting the
    ‘cold fusion’ issue. When deuterons transmute into protons in the
    recognized way, energy has to be added by gamma radiation and the
    products are one proton plus one neutron. ‘Cold fusion’ has posed the
    question “Where are the neutrons?”. It would seem that what happens in
    the world of very high energy collisions is not the same as events in the
    cool conditions of a medium at water temperature.

    In the sea the process described above can occur to keep the
    equilibrium between the deuterium oxide D2O, hydrogen deuteroxide HDO and
    hydrogen oxide H2O. The charge imbalance is there avoided by the
    reciprocal transmutation, but one must assume charge fluctuations
    involving the atomic nuclei in exchanges with the background field.
    Possibly this activity has connection with the many reported energy
    anomalies found in experiments with water, particularly those involving
    electrolytic action.

    In high energy physics of deuteron transmutation the charge issue is
    avoided by the action we term the ‘neutron’, which this author must
    assume is really a proton or antiproton neutralised by an
    accompaniment of electron(s) and/or positron(s).

    However, we still ask the question “How long will it take before
    a kilogram of heavy water converts to a 50% mixture of heavy water
    and normal water?” Note that this question is put in terms of weight
    because the overall volume of the water would increase as deuterons
    change into protons. Furthermore, unless there is neutron emission, there
    would be release of hydrogen gas unless oxygen were to be absorbed. The
    answer must lie in the understanding of the source of the added positive
    charge taken up by the newly created protons. If this source is sluggish
    in providing that charge then the transmutation rate will be retarded.
    It may be measured in hundreds or thousands of years under normal
    environmental conditions or where water is sealed in a container.
    Equally, it may be a matter of days only where the heavy water is
    absorbed into a palladium host metal carrying electric current.
    Accordingly, one must wonder if the charge adjustments applicable
    where protons convert into deuterons and vice versa affect adjustments
    to the natural equilibrium ratio of protons to deuterons and see how
    this affects the ‘cold fusion’ deuteron transmutation process.

    This Part I Report will not enter into speculations on the
    technological implications of the latter issue. The main point made
    in this contribution is that the ratio of protons to deuterons which
    occurs naturally is not an arbitrary consequence of disorder in the
    evolution of historical events. It is a fundamental physical constant
    determined by the same regulating factors which fix constants such as the
    proton/electron mass ratio.

    Footnote

    In the paper which follows as APPENDIX C the deuteron features
    as a component of the triton and the decay of the triton is related to
    events by which the deuteron is itself affected by the mu-meson
    bombardment.

    There is a fundamental difference in that action compared with the
    situation above. Whereas the beta-particles are really the target
    affected by those mu-mesons in the isolated proton and deuteron forms,
    when one considers these as part of larger atomic nuclei the decay
    action is dominated by mu-meson attack on a different and larger
    target which latches onto nucleons belonging to atomic nuclei having
    atomic mass number of 3 or more.

    Though this may sound complicated, in the limited space available
    in this Report, the author has deemed it best to present this Appendix B
    and Appendix C as separate self-contained texts and it is hoped that the
    reader will be able to follow the logic of the separate presentations
    even though study of the author’s other published work will be needed to
    fully comprehend the distinction.

    The threshold between 2 and 3 nucleons has a dynamic ‘gravity’
    balance connection with the ‘graviton’ mass developed in Fig. 7 as
    shown in APPENDIX A. The ‘larger target’ involved in proton
    creation, one larger than the electron or beta-particle by a factor
    of 1843 in volume, is explained on page 40 of APPENDIX C and more fully on the
    second page of APPENDIX E. The relevance of the latter to the
    deuteron as a component of the triton is that it brings about the
    actual creation of a proton within the triton as a deuteron-proton
    composition. It is shown on pp. 42-43 that the mu-meson bombardment of
    that space lattice charge ‘target’ triggers the transient creation of
    a proton, on average, every 12.2 years. If this event occurs in the
    3-or-more-nucleon core, it may well involve a proton transfer and
    a nuclear transmutation. This is an action quite distinct from that
    described above where it was assumed that the two beta-particles in the
    proton or the B-state core deuteron were the ‘target’ for mu-meson
    attack.

    APPENDIX C

    [The following paper was presented at a conference held by
    ANPA, the Alternative Natural Philosophy Association, in
    Cambridge, England during 9-12 September 1993. Though the
    title refers to the ‘model proton’ the main thrust of this
    paper concerns the triton and theoretical derivation of its
    lifetime.]

    THE MODEL PROTON IN A NON-COMBINATORIAL HIERARCHY
    Harold Aspden

    The proton, as the primary form of matter, is at the creative
    equilibrium interface between matter and vacuum energy. Just as there
    is electron-positron pair creation and annihilation activity in the
    vacuum field, so there may be an underlying ‘heavy’ lepton (muon)
    activity in the universal field environment. This paper explores the
    relationship between the muon and the proton on the simple assumption
    that Nature is constantly trying to create protons but is normally
    restrained by energy equilibrium criteria.

    The author’s theoretical model is of long standing record, as
    outlined in Physics Today, November 1984, p. 15, and as acknowledged
    for its remarkable ‘classically-derived’ prediction of the proton-electron mass ratio in the paper reporting its measurement by Van Dyck
    et al, International Journal of Mass Spectrometry and Ion Processes,
    66 (1985) 327-337.

    The advance reported in this ANPA-15 paper concerns recent
    developments of this model which focus upon aspects of the deuteron and
    the triton. In particular, the model will be tested by deriving
    theoretically the 12 year lifetime of tritium on the assumption that it
    decays owing to interaction with that same heavy lepton field
    environment that creates the proton. This approach then affords insight
    into the exposure of the deuteron to that heavy lepton field activity.
    The quantitative aspects of the energy transactions involved are too
    remarkable to be attributed to coincidence.

    The advantage to humanity which such research affords is linked to
    the prospect of success now emerging from research on cold fusion,
    inasmuch as the theoretical processes envisaged explain why no neutrons
    result from what is deemed to be deuteron fusion. The consequences
    concern an alternative natural philosophy having bearing upon the
    forces of creation in the universe and are important in that by
    theorizing about the derivation of the proton mass in relation to the
    electron there is spin-off which can cause physicists to revise their views
    on nuclear theory.

    1. The Triton in Focus

    Tritium is the third isotope of hydrogen. It is radioactive but
    decays by releasing a minute amount of energy – about one thirtieth of
    what is needed to create an electron. Its nucleus, triton, is an enigma
    in physics. A portion of the energy it releases somehow vanishes without
    trace and this phenomenon has been the basis of the neutrino hypothesis.
    The fusion of hydrogen in the sun is believed to be the source of energy
    which powers our existence on Earth, but the supposed related neutrino
    emission from the sun is itself a problem. There is just not enough
    solar neutrino energy intercepted by our Earth to balance the energy
    books representing the solar hydrogen fusion hypothesis.

    It is submitted that the triton is the guardian of the secrets which
    govern our understanding of the cold fusion process encountered when
    deuterium is loaded into a cathode in a Fleischmann-Pons experimental
    cell.

    The triton has a lifetime of 12 years. That is a very important
    clue and it has caused this author to focus on the assumption that the
    triton incorporates a ground-state deuteron, which is the seat of the
    decay action. This means that the deuteron itself is subject to
    radioactive decay processes but, as will be shown, this decay action
    involves a proton creation followed by proton decay. What may then
    emerge as a cold fusion product is a tritium nucleus or the
    reestablishment of the deuteron in its original form. In other words, the
    deuteron appears stable, but it can develop into a triton by a
    natural lifetime process, albeit with very much higher probability if
    another deuteron in close proximity is available to sacrifice a proton.

    This proposal is not hypothetical. It is based on a theme
    developed in the author’s earlier work, published long before the
    Fleischmann-Pons cold fusion discovery was announced. See, for
    example, the American Institute of Physics journal ‘Physics Today’, 37,
    p. 15 (1984).

    There the author drew attention to the P and Q scenario where a
    proton of energy P was attracted to an oppositely charged partner of
    energy Q. If each has a charge e bounded by a sphere of radius a
    determined by the J. J. Thomson formula (E = 2e²/3a), the total energy
    of the P and Q charge in surface contact is:

    P + Q – 3PQ/2(P+Q)

    For the binding energy term to be a maximum, P and Q must have
    a certain relationship. This is when 1+Q/P is the square root of 3/2.
    The reader may then verify that with P as 1836 the value of Q is 413,
    which is the combined energy of a pair of mu-mesons in electron units.
    Resulting from this discovery the author has advanced elsewhere a theory
    of proton creation which explains how protons are built from the
    virtual muonic energy activity in the vacuum field. Note here that
    electron-positron creation and annihilation are ongoing activities in the
    vacuum field, the basis of quantum electrodynamics, and the mu-mesons
    are the ‘heavy electrons’ which hitherto have been seen in physics as having
    no role or function that could justify their existence in Nature. Their
    role is, of course, the most important of all, that of matter
    creation in the form of protons!

    Now, we are, in the description which follows, to see how this same
    process of proton creation is at work within a deuteron or a triton.

    The algorithm which the reader may keep in mind in the analysis which
    follows is the curious mathematical fact that 4Q, meaning four mu-meson pairs, if combined with the energy released by creating two (P:Q)
    systems from two bare P components, will be exactly that needed to
    create a new proton or antiproton P.

    To prove this write:

    P = 4Q + 3PQ/(P+Q) – 2Q

    Then rearrange algebraically as:

    P(P+Q) = 2Q(P+Q) + 3PQ

    or:

    3P² = 2P² + 4PQ + 2Q² = 2(P+Q)²

    which is the above relationship between P and Q as calculated from
    minimization of energy potential.
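
    A numerical check of this identity and of the optimum Q, as a sketch
    with P = 1836 electron units as in the text:

        import math

        P = 1836.0
        Q = P * (math.sqrt(1.5) - 1)    # optimum from 1 + Q/P = sqrt(3/2)
        print(Q)                        # ~412.6, the mu-meson pair energy
        # creation balance: 4Q, plus the binding released by two (P:Q) systems,
        # less the 2Q invested in those systems, supplies one new P exactly
        print(4 * Q + 3 * P * Q / (P + Q) - 2 * Q)   # ~1836.0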

    It follows, therefore, that if a particle containing two P
    nucleons is bombarded by the mu-meson vacuum energy background there
    is a condition where 8 mu-mesons will create a third P. This is
    tantamount to a fusion process occurring at room temperature which
    adds a nucleon to a deuteron.

    Note that the energy is ‘borrowed’ partially from the vacuum as
    a vacuum energy fluctuation and partly provided by the degeneration
    of two nucleons in creating the two Q dimuon components. The system
    will ‘restore’ by causing a proton elsewhere, as in a nearby deuteron,
    to decay, but for a transient period there will be a very active energy
    situation which can give basis for much that is observed in cold fusion
    phenomena.

    The remainder of this paper will develop the above theme by
    reference to the triton, and the verifying key which confirms what is said
    above is the resulting calculation of the 12 year mean lifetime for the
    transmutation just mentioned. This gives insight into the energy
    generation rate that can be expected in the cold fusion deuteron
    reaction. A deuteron will experience the mu-meson transmutation
    described on an average that is set by the triton 12 year lifetime. Since
    the deuteron is in the required ground state condition for 2 parts in 7 of
    any period of time, the probable deuteron transmutation lifetime by this
    process is 42 years. However, one cannot exclude secondary nuclear
    reactions triggered by the excess energy transients of the above process.
    [Note: the 2 parts in 7 factor is derived in the author’s paper ‘The
    Theoretical Nature of the Neutron and the Deuteron’, Hadronic Journal, 9,
    pp. 129-136 (1986), APPENDIX E of this Energy Science Report.]

    Note that the deuteron ground state is one in which the deuteron
    structure has two antiprotons sitting amongst three beta-plus
    particles, represented by (e+:P:e+:P:e+), where P denotes the
    antiproton, and the process we are to
    consider is one where attack by 8 mu-mesons causes the outer beta-plus
    particles to become dimuon Q charges as a newly created P charge is
    nucleated from a nearby vacuum lattice charge. The latter will be
    understood from the following detailed description.

    The Constant Vacuum

    In the Winter 1992 issue of 21st Century one reads of an interview
    with Martin Fleischmann and his Italian theoretician colleague Giuliano
    Preparata on the eve of the Third Annual Cold Fusion Conference.

    This was an interview which revealed that we could expect a
    backlash from the criticism levied at the pioneer work on cold fusion.
    It has aroused retaliation which will take the form of an attack on the
    weaknesses of much that has become accepted in theoretical physics. The
    following two quotations from that interview will serve to set the
    scene for the subject developed in this paper:

    ‘There is something seriously adrift with modern theory.
    There is a lot of work to be done, lots more to be
    discovered.’

    ‘Preparata pointed to the hyperfine structure constant,
    alpha, which relates the electrostatic and electro-magnetic
    fields and is crucial in physics. “I often ask myself,” he
    said, not really joking, “What if the fine structure
    constant were like the Dow-Jones index and constantly
    shifted up and down? Then there could be no science and no
    rationality …. If it were not for constants such as the
    fine structure constant and the speed of light, then our
    universe would not exist.”’

    Here then is a statement that should cause physicists to wonder and
    reason as to why the textbooks of science do not discuss the way in which Nature determines that fine structure constant and thereby is able to
    build our universe. The derivation of the value 137.0359, which is α⁻¹, where α
    is 2πe²/hc, e being the electron electrostatic charge, h Planck’s constant
    and c the speed of light, is crucial to everything that is fundamental
    in physics. Next, in order of fundamental importance, there is the
    understanding which can come from the theoretical derivation of β, the
    proton-electron mass ratio, as 1836.152.

    In a 1985 book entitled ‘The Fundamental Physical Constants and
    the Frontier of Measurement’, published under the auspices of the
    Institute of Physics in the U.K., B. W. Petley of the National Physical
    Laboratory describes the theoretical attempts to derive these
    dimensionless constants and states at page 161:

    ‘No doubt the theoretical attempts to calculate α and β
    will continue – possibly with a Nobel prize winning
    success.’

    Now, the reader may wonder how this concerns the triton and cold
    fusion. Well, perhaps Martin Fleischmann and Giuliano Preparata are
    unaware of the connection via this author’s work, but its very essence
    is a vacuum medium that bombards us with action and is a seat of events
    that trigger photon creation, thereby determining α, and proton creation
    which determines β. The Physics Letters, 41A, pp. 423-424 derivation of α
    was published in 1972 and the theoretical derivation of β was published
    by the Italian Institute of Physics under the title: ‘Calculation of
    the Proton Mass in a Lattice Model for the Aether’, in Il Nuovo
    Cimento, 30A, pp. 235-238 (1975).

    The first paper derived α in terms of a resonance in a fluid
    crystal structure of the vacuum and the analysis involved knowledge
    of the lattice cell dimensions. The underlying research had already at
    that time solved the problem of gravitation and revealed that a
    virtual pair of mu-mesons had association with each cell and were the
    building blocks for hadronic matter including protons. Of particular
    relevance to the calculation of the proton-electron mass ratio in
    free space is the way in which, as a rare occasion governed by
    statistical chance, nine mu-mesons come together at the seat of a
    vacuum lattice charge to create a proton.

    Here then is Nature’s arsenal by which it can act, even from within
    our bodies, to bombard matter with mu-mesons. These are energy quanta
    which act in concert to strike the body blow which converts a tritium
    atom into helium 3 and a deuterium atom into tritium, in the process
    creating a new nucleon in an act seen as fusion but by promoting the
    decay of one elsewhere. Indeed, we confront a scenario where Nature is
    constantly trying to create protons throughout space but it only
    succeeds where the energy equilibrium as between the sub-quantum vacuum
    underworld and matter has become unbalanced. Generally speaking, if
    a new proton is created an old one somewhere nearby must decay.
    Therefore, if the nuclear chemistry suggests that an intruder proton
    moves to fuse with the deuteron so creating a tritium nucleus, the real
    event is probably one where the mu-meson attack on the deuteron has
    caused a proton to appear as a nucleon whereupon the energy equilibrium
    bookkeeper has ‘ordered’ the demise of that intruder proton.

    This may seem fantasy speculation, but the reader should be
    mindful of the power of the author’s published research by which those
    α and β constants were derived. The calculations matched the part-per-million
    precision of the measured values and were in exact accord.

    We can, therefore, proceed to study the triton with confidence and
    our objective, as with corresponding published work on the neutron, for
    example, is no less than the aim to confirm the theory by simultaneously
    deriving values for the magnetic moment, the mass and the lifetime of
    the triton.

    The reader can share in the author’s pleasure of discovery by
    working through this exercise, because the triton, rather curiously, lends
    itself to straightforward analysis.

    It is necessary to engage in some preamble to explain the factors
    involved but to keep the focus on the objective the argument will
    advance directly to the calculation of these three values and the
    reader is asked to keep in mind that the ultimate objective is the
    calculation of the triton lifetime. The deuteron component of the
    triton stands as the target and so much of what is discussed is addressed
    at the deuteron transmutation as if it has the same lifetime in its
    ground state.

    The Triton’s Vital Statistics

    The triton has a structure supporting three units of nucleon mass
    presenting an overall unit of positive charge e. Its mass is slightly
    less than that of three protons. Indeed, we should begin by working out
    precisely how much the measured mass differs from that of three protons
    as that provides the value we need to compare with the one derived
    theoretically.

    We will work with mass expressed in terms of the electron rest
    mass as a number ratio.

    The author’s data reference is the 2nd Edition of the McGraw-Hill,
    Condon and Odishaw Handbook of Physics (page 9.65).

    Atomic mass of proton plus electron: ……… 1.00782519
    Atomic mass of triton plus electron: ……….. 3.01604971
    Unit atomic mass in electron units: …………. 1822.888

    This latter value was found by dividing the first atomic mass
    into 1837.152…, which is the proton mass incremented by one electron
    unit.

    If we now multiply the first-listed atomic mass by 3 and
    subtract the second-listed atomic mass, the result is 0.007426 and
    multiplication by the unit atomic mass in electron units gives 13.54.
    This, therefore, is the measured mass difference as between 3 protons plus
    two electrons and the triton.

    It follows that the triton has a mass that is 11.54 electron mass
    units below the combined mass of three protons. Our task is to find the
    model form of the triton which allows us to calculate this mass
    discrepancy.
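
    The mass bookkeeping can be reproduced from the two tabulated atomic
    masses (a sketch):

        m_p_e = 1.00782519            # atomic mass: proton plus electron
        m_t_e = 3.01604971            # atomic mass: triton plus electron
        amu = 1837.152 / m_p_e        # one atomic mass unit in electron units
        deficit = (3 * m_p_e - m_t_e) * amu
        print(amu, deficit)           # 1822.888 and 13.54
        print(deficit - 2)            # 11.54 electron units below three protons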

    The other items of data we need to extract from the same data
    source (page 9.93) are (a) the triton lifetime of 12 years, (b) the half-spin
    unit of angular momentum (presumed to be the same as the proton’s) and
    (c) the magnetic moment stated in nuclear magnetons to be 2.9789.

    It is, however, better for us to avoid reliance on data that is
    based on indirect measurement and take note of the direct measure of the
    triton nuclear magnetic moment presented as a ratio in terms of the
    proton magnetic moment effective in the same reacting environment. This
    ratio, as quoted from the Dover 1966 text of ‘Atomic Physics’ by
    Harnwell & Stevens, is:

    1.06666

    The task ahead is then to guide the reader through the analysis by
    which the three measured numerical dimensionless values just presented as
    the triton’s credentials are duly derived by pure theory.

    The Magnetic Moment of the Triton

    It is appropriate here to refer to the author’s paper entitled ‘The
    Theory of the Proton Constants’, Hadronic Journal, 11, pp. 169-176
    (1988).

    On page 174 of this paper the gyromagnetic ratio of the proton
    is deduced theoretically as being 2.792847367, which compares with the
    measured value of 2.792847386(63) and so is quite precise, it being
    computed from a proton modelled on a structured resonant state.

    This, in effect, is the proton’s own magnetic moment expressed in
    terms of nuclear magnetons and so one can see that the 2.9789 triton
    magnetic moment above is derived from the measure 1.06666 and the
    independent measure of the proton’s gyromagnetic properties.

    Now, when we have regard to the fact that the triton’s magnetic
    moment is measured as a frequency ratio as between the reaction of a
    triton and a proton in the same magnetic field, there is the curious
    feature that the two frequencies have what appears to be a perfect
    integer ratio, namely 16:15, which is the near-unity ratio factor
    1.06666.

    This causes one to wonder whether the interfering wave modulation
    which would develop harmonic interactions somehow locks the response of
    the triton onto a condition that is exactly set by this 16/15 ratio,
    even though the true triton magnetic moment with no proton reaction
    present is virtually that of three nuclear magnetons.

    With this doubt, there is little purpose in trying to derive the
    precise quantity 2.9789 and it suffices for our purposes to justify, if
    only as an approximation, the triton magnetic moment as being 3
    nuclear magnetons.

    The interesting point to then take into account is that amongst
    all atomic nuclei the triton is unique as having by far the largest
    magnetic moment in relation to its nuclear angular momentum. The
    ratio is 6:1, whereas Ag¹⁰⁸, which sits between the two stable isotopes of
    silver, has a half-life of 2.4 minutes and comes closest with an
    exceptionally high ratio factor of magnetic moment to angular
    momentum of 4.2.

    What is it, therefore, that gives the triton the magnetic moment
    of 3 nuclear magnetons based on a single half-spin unit of angular
    momentum?

    The simple answer which is now suggested is that the triton comprises
    three nucleons two of which are protons and one of which is an
    antiproton. They all react magnetically in opposition to a magnetic
    field and so the two protons ‘spin’ one way and the antiproton spins the
    opposite way. The magnetic moments add to 3 units and the ‘spins’ add
    to a single half-spin unit of angular momentum.

    This then explains the magnetic moment property and, further, we
    have now an insight into the structure of the triton.

    The Structure of the Triton

    Once the structure of the triton has been pictured in our minds then
    we can proceed with the confirming analysis by calculating the triton’s
    mass discrepancy and its lifetime.

    The interesting feature seen already is that we have not pictured
    the triton as comprising one proton plus two neutrons. Keep in mind the
    no-neutron syndrome of cold fusion! Three protons will not hold
    together even in a quasi-stable aggregation. This is why physicists have
    taken the easy course and assumed that it consists of two neutrons plus
    one proton with some kind of glue that introduces a negative mass
    binding energy.

    Such assumption has led them down a blind alley. We need to add
    something such as beta-minus or beta-plus particles or be bold enough
    to imagine a stable entity including antiprotons. The truth can only
    be found by discovering the structure which gives the right answers for
    the three measured parameters presented above.

    Discovery in this pursuit needs inspiration and intuitive analysis
    and it is here that the author must lead the reader directly to the
    solution and then show how the calculated properties prove that it has
    to be the correct structure of the triton.

    The triton does, in fact, comprise two protons plus one
    antiproton, and our only concern now is to understand the ‘binding’
    that holds the three nucleons together but keep the proton and
    antiproton far enough apart so that they do not fuse and mutually
    annihilate one another.

    Now, here we are guided by the fact that independent analysis of
    the nature of the deuteron has shown that in its prevalent state it
    comprises two protons bound together by an intermediate beta-minus
    particle, otherwise termed an electron. This is fully explained in the
    previous reference, the author’s paper ‘The Theoretical Nature
    of the Neutron and the Deuteron’, Hadronic Journal, 9, pp. 129-136
    (1986). The less prevalent ground state comprises an in-line
    configuration of three positive beta particles separated by two
    antiprotons.

    We may be further guided by earlier work reported by the author in
    his book ‘Physics without Einstein’, published in 1969 by the author under
    the trade name Sabberton Publications.
    On pages 147-152 of that work there is a description of nuclear bonds,
    which the author termed chains, which took the form of an alternating
    sequence of beta-plus and beta-minus particles and which linked adjacent
    hole-cum-charge sites in the vacuum lattice which locked onto the atomic
    nucleus and caused it to form a shell structure. Indeed, this theme was
    further elaborated in the author’s paper entitled ‘The Chain Structure
    of the Nucleus’, published in 1974, also by the same publisher.

    The data there presented show that a charged meson can attach
    itself to a charged nucleon to release sufficient energy to account
    for its own mass-energy and further the total energy of a chain spanning
    between two vacuum lattice hole-cum-charge sites. Furthermore, there
    is a balance of mass-energy or mass deficit which one calculates as
    being some 12 electron mass units.

    In these circumstances, and having regard to the fact that we are
    trying to account for a triton mass deficit of 11.54 electron units,
    the author sees no point in going further than the assertion that the
    triton has a single beta particle chain linking the antiproton and the
    proton pair, the latter regarded as being seated at an adjacent
    lattice site in the vacuum lattice system.

    The beta particle chains are deemed to be very much a part of the
    structure of large atomic nuclei. Each chain has up to 170 such
    particles, corresponding to the fact that the vacuum lattice spacing
    is 108π times the beta particle radius. There are two of the author’s
    papers of easy reference as background to this subject. They are:
    ‘Aether Theory and the Fine Structure Constant’, Physics Letters, 41A,
    pp. 423-424 (1972) and ‘Theoretical Evaluation of the Fine Structure
    Constant’, Physics Letters, 110A, pp. 113-115 (1985).

    As will be seen from those papers there is a factor 1843 derived
    from a resonance closest to a zero potential condition and representing
    the volume of a vacuum lattice charge in relation to a beta
    particle. Indeed, the derived value of the fine structure constant was
    given in the form:

    α⁻¹ = 108π(8/1843)^(1/6) = 137.0359
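
    The quoted value may be checked by direct evaluation of the formula as
    printed (a sketch):

        import math
        print(108 * math.pi * (8 / 1843) ** (1 / 6))   # ~137.036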

    The fact that the space occupied by the vacuum lattice charge
    can, given enough energy input, develop into 1843 beta particles from
    which a proton form can condense is crucial to the creation of the
    nuclear chains, but the action of creation of a proton depends
    primarily upon the mu-mesons that do the work.

    The concept of space conservation in charge particle
    transmutations is consistent with energy conservation, bearing in mind that
    the pressure or energy density within the charge of the vacuum lattice
    particle is in equilibrium with the ‘gas-type’ pressure set up by the
    mu-meson pairs that, on average, populate each cubic lattice cell of side
    dimension 108π beta-particle radii. Thus the number of beta particle
    charge volumes that equals this cube volume is a measure of a factor
    N which is relevant to the inverse chance of a ‘hit’ as the annihilation
    and random position recreation of a mu-meson recycles at the standard
    (Compton electron) frequency associated with vacuum energy charge pair
    creation activity.

    To evaluate some numbers, note that the lattice charge has a
    Thomson radius that is larger than the beta particle charge radius by
    a factor 12.26, which is the cube root of 1843. The energy of the
    lattice charge is therefore 1/(12.26) or 0.08156 electron units. The
    number of electron charge volumes in the unit cubic cell of the
    vacuum is (108π)³ divided by 4π/3 and so is 9,324,644. Dividing this by
    1843 we find that there are 5059.49 lattice charge volumes of energy
    0.08156 electron units in each cubic cell of the vacuum, which is
    412.666 electron mass units of energy. This is double a mass energy a
    little below 207, thereby representing the combined mass energy of a
    virtual mu-meson pair that is the energy in each cell.
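
    The numerical chain of this paragraph can be followed step by step; a
    sketch using the 108π cell dimension and the 1843 volume factor:

        import math

        r_ratio = 1843 ** (1 / 3)    # lattice charge radius over beta radius: 12.26
        e_lattice = 1 / r_ratio      # lattice charge energy: 0.08156 electron units
        cell = (108 * math.pi) ** 3 / (4 * math.pi / 3)   # beta volumes per cell
        n = cell / 1843              # lattice-charge volumes per cell: ~5059.5
        print(cell, n, n * e_lattice)   # last figure ~412.7, a virtual muon pair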

    The fundamental derivation of the 108 cell dimension parameter
    and the 1843 factor, the subject of the author’s primary analysis of
    vacuum energy discussed in the above-referenced 1972 Physics Letters
    paper, therefore leads to the theoretical derivation of the mu-meson
    energy quantum. It tells us the energy content of the vacuum state.

    The triton, when created, lives amongst this activity and its
    rather special structure makes it vulnerable to decay owing to the
    bombardment by those mu-mesons. The core target for that bombardment
    is not the antiproton or the two proton nucleons in its composition.
    The target is the vacuum lattice charge to which the triton is attached.
    The deuteron, however, is also subject to such attack and here, too, the
    real target is a lattice particle in its near vicinity.

    An isolated proton or a deuteron does not need to develop a
    fixed association with a lattice charge because its mass has not exceeded
    a critical level above which the dynamic quantum ‘Zitterbewegung’
    behaviour needs a collective balance by a graviton system. The
    phenomenon of gravitation is dependent upon the inertial reaction of
    vacuum particles in the form of gravitons which have a mass-energy of
    2.587 GeV, an energy value having an effective mass between two and
    three proton masses. This is fully explained in the author’s works. See,
    for example, ‘The Theory of the Gravitation Constant’, Physics Essays,
    2, pp. 360-367 (1989).

    However, when the proton or deuteron is part of a water molecule
    the nuclear chain structure of the oxygen atoms will provide the
    lattice location in the vacuum field system. This is why the cold
    fusion events we see with free deuterons in a palladium host metal are
    not, so far as we can judge, occurring in water.

    When atomic nuclei exceed the mass of two protons they do, of
    necessity, share in a collective action requiring dynamic balance by a
    multiple graviton system and that action requires that their
    combination as a structured nuclear entity spreads itself over a
    multiplicity of vacuum lattice sites. The triton, therefore, has to
    have a nuclear beta particle chain able to bridge two lattice sites
    and it probably has two protons in close proximity that straddle the
    lattice charge of one site whereas the antiproton nucleon constituent
    is seated at the other lattice charge site. Tritium is, of course,
    radioactive whether in the molecular structure of water or not and so
    it warrants respect and caution from a health viewpoint.

    The Triton Lifetime

    This structure already discussed now leads us to the calculation
    of the decay property of the triton. To proceed we restate part of
    the commentary in the introduction.

    In order to set up the nuclear bond in the form of a chain of
    beta particles a meson charge has to develop as a charge attracted to
    the proton. This meson charge is termed a Q charge and its energy is that
    of the unit cell energy, approximately 413 electron mass units as already
    explained. Two opposite polarity charges e, having energy E in
    electron units represented by P and Q and conforming with the J. J.
    Thomson formula:

    E = 2e²/3a,

    where a is charge radius, will, when attracted so as to be in surface
    contact at their charge radii, have a combined energy E’ which is given
    by:

    E’ = P + Q – 3PQ/2(P+Q)

    This formula is basic to proton creation and was mentioned by the
    author in Physics Today, 37, p. 15 (1984), so we are not introducing
    something new at this stage in developing the theory of the triton.

    In fact P and Q are in equilibrium as an optimum energy condition
    for which the negative term is a maximum when P is 1836 and Q is 413.

    The point of interest is that E’ can be calculated to be 92.7
    electron mass units below the value of P.

    In other words, given that there are two protons well separated
    by the diameter of the vacuum lattice charge (or a beta particle in the
    case of a deuteron), we can see how such a system, which features in the
    triton composition, can deploy twice the energy of 92.7 electron mass
    units to assist in a nuclear transmutation. This sums to 185.4
    electron mass units.

    We then note that the stimulus of 4 pairs of virtual mu-mesons,
    each of 412.7 electron mass units will suffice with the 185.4 electron
    mass units to create a proton of 1836 electron mass units. In fact,
    the energy equation is rigorous in providing exactly the amount of
    energy needed, which is why the decay of a triton yields so little energy
    that the result has remained a puzzle to scientists.
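
    Numerically, as a sketch with P = 1836 and Q = 412.7 electron units:

        P, Q = 1836.0, 412.7
        binding = 3 * P * Q / (2 * (P + Q))   # released on surface contact
        print(Q - binding)                    # ~ -92.7: E' sits 92.7 units below P
        print(4 * Q + 2 * 92.7)               # ~1836: 8 muons plus the two deficits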

    The scenario of interest is then the action by which the triton can
    be the seat of a process by which a proton is created within the triton
    itself so as to force a transmutation.

    The condition we are considering is a coincidence event when 8 mu-mesons hit the lattice charge in the same vacuum cycle. If the result
    is the creation of a proton then the recovery of the equilibrium of the
    vacuum/matter interaction will involve the demise of a proton in
    matter nearby.

    The task in determining triton lifetime is simply that of determining
    proton creation probability in a vacuum lattice site charge within
    matter.

    Proton Creation Probability

    As already shown, it takes 8 virtual muons to trigger the action
    leading to the creation of a proton. The question is how to bring 8
    muons together for this purpose. There is an active virtual muon pair
    in each cell of the vacuum medium, that is for each lattice charge (-e), the latter being neutralized, so far as we can sense in our matter
    frame, by a positive continuum background.

    If the positive virtual muon μ+ enters the lattice charge it will
    momentarily, in the relevant action cycle, render that charge neutral
    by converting it to some neutral paired charge form. Therefore, to
    get 8 muon energy quanta to combine in some way, we need to have 8
    lattice charges in close proximity in a state in which either all are
    transiently neutral or, alternatively, 7 are neutral and one is
    charged to a double unit level, as by being transiently primed by the
    addition of μ⁻.

    Now, the chances of one lattice charge being primed by either muon
    in its cell are 2 in 5059. There are 256 combinations of chance
    simultaneous priming of 8 such lattice charges in each action cycle.
    The following tabulation shows the virtual muon polarity combinations
    as distributed amongst the various mixed states, S being the number of
    combinations of each mix:

     S    μ+   μ−

     1     8    0
     8     7    1
    28     6    2
    56     5    3
    70     4    4
    56     3    5
    28     2    6
     8     1    7
     1     0    8

    Only the first two entries under S in this table represent states that can
    satisfy the merger requirements by creating neutral energy quanta with
    a single nucleating charge. Thus there are 9 chances in the 256 for the
    conditions to meet the proton creation trigger requirement. In other
    words, in every action cycle at the Compton electron frequency we have
    9 chances in (5059)⁸ of proton creation referenced on a particular
    lattice charge.

    This gives us a ‘lifetime’ in the sense that the attempt to create
    a proton can influence a decay process which sheds a proton, as already
    explained.

    That lifetime is:

    (5059)⁸/[9(1.235×10²⁰)] seconds, or 12.2 years

    The mean lifetime reported for the triton is 12 years and so this
    result is a quite remarkable application of the author’s theory.
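
    The closing computation, as a sketch using the electron Compton
    frequency of 1.235×10²⁰ per second quoted above:

        seconds = 5059 ** 8 / (9 * 1.235e20)   # mean wait for the 9-in-5059^8 event
        print(seconds / (3600 * 24 * 365.25))  # ~12.2 years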

    Discussion

    Given the above solution to the mysteries of triton decay, it needs
    little imagination to probe the possibility that a deuteron, in its
    prevalent state, as two protons sitting on diametrically opposed sides
    of a central beta-minus particle, could become subject to the
    stability of a nearby vacuum lattice charge and experience similar
    proton infusion. In this case, the deuteron would become a triton,
    whereas in the triton the proton infusion into the two-proton component
    destroys the beta particle nuclear chain and severs the link with the
    antiproton component, which thereby becomes involved in a decay which
    replenishes the virtual mu-meson population of the vacuum.

    The deuteron proton infusion process would be accompanied by the
    demise of a proton elsewhere, but what we would see with two deuterons
    in close proximity would appear to be one deuteron shedding a proton
    and a beta minus particle and the other deuteron acquiring a proton and
    shedding a beta plus particle, which overall amounts to an act of
    fusion. Two deuterons merge to create a proton and a triton by
    shedding energy as the two beta particles annihilate one another.

    To account for the nucleation of the Q charge forms, the less
    prevalent deuteron ground state composition, having five component
    charges, is the best basis for the transmutation under discussion. The
    central beta particle binds the two proton forms whilst the outer beta
    particles transform into Q charges to release the extra energy needed
    to convert the 8 mu-mesons entering the lattice charge target into a
    proton.

    One can develop this theme by investigating the expected excess heat
    generation rate that could come from the 12 year decay rate for the
    deuteron ground state and one may further wonder how that process might
    be accelerated.

    However, the main conclusion reached in this work is that there is
    basis for understanding the cold fusion reaction and the focal issue here
    is the interpretation of the process by which the triton is naturally
    radioactive at room temperatures. It is believed that the account
    presented here will help with that understanding.

    APPENDIX D

    [This is the author’s paper ‘The Theory of the Proton Constants’ which can be seen as reference 1988b on this website.]

    APPENDIX E

    [This is the author’s paper ‘The Neutron and the Deuteron’ which can be seen as reference 1986d on this website.]

    ENERGY SCIENCE REPORT NO. 3

    PART I

    The ‘Impossible’ Dream?

    I am going to describe something which physicists will declare to be ‘impossible’ and then I am going to explain in simple scientific terms why I believe many of those physicists will see the need to retract and pay attention.

    I am a physicist myself, by professional qualification and by vocation, with an academic research background in electrical engineering, though my working career has been in that field of corporate management concerned with inventions and patents dealing with technology rooted in electronics and magnetism. I would not therefore be writing the words which follow without being sure of my ground.

    Imagine that in your dreams you catch a glimpse of a house of the mid 21st century and notice that in its cellar there is a rather curious elongated structure that you can best describe as pipework. You suspect it is a heater or air-conditioning unit. The owner of the house explains its function and how it works.

    Firstly, the main pipe section is composed of steel or nickel, according to whether the air flow through it is hotter or colder than the exposed outer surface of the pipe. Peripheral to the pipe is a unit described as a pump. It is a heat pump, a mid-21st century model. It does what heat pumps are supposed to do and does it rather well. It is quiet and efficient. It requires an input of electricity and it pumps heat. It can pump 100 joules of heat between two temperatures, one ambient and one 30 degrees C above or below ambient, and do that with an intake of 15 joules of electricity. Physicists know that is possible because it fits within the Carnot efficiency limitations imposed on heat engines and knowledge of 19th century technology explains how it works. It is very familiar territory to those expert in thermodynamics.
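
    That performance figure sits comfortably inside the Carnot bound, as a quick check shows; a sketch for the heating case, assuming an ambient of about 20 degrees C:

        T_ambient = 293.0                  # kelvin, about 20 degrees C
        dT = 30.0
        cop_limit = (T_ambient + dT) / dT  # ideal heating COP across the 30 C lift
        print(cop_limit)                   # ~10.8 joules moved per joule of input
        print(100 / 15)                    # ~6.7: the cellar unit's claimed figure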

    Now, about that pipe. Here again, in history, going now into the latter part of the 19th century, there was the discovery of a phenomenon by which heat flow through a metal could generate electricity. Assuming the heat flow was through the walls of a long pipe, it needed a magnetic field directed along the length of the pipe to promote the setting up of an electric field, or EMF (electromotive force, otherwise expressed as a voltage), the latter being directed at right angles to both the magnetic field and the direction of heat flow.

    So, you see, if we magnetize the pipe along its length any heat flowing through its pipe walls, that is between its outer surface and its inner bore, will set up circulating electric current inside the pipe.

    Now, if you use your imagination, you can see that here is something physicists discovered over one hundred years ago (in 1886), a fascinating way in which to convert heat into electricity, but if you, the reader, are a physicist, have you ever heard of this before?

    I know about it because it was mentioned at page 592 of a book I was given on 20th December 1945 because I had won that year’s `Physics Prize’ awarded at my school. The few words on that page 592 did not refer to tubes of circular section. They referred to experiments on metal sheets, but I knew how to roll a metal sheet to form a tubular pipe and so here was a book I have had in my possession for more than 50 years and it told me that it had been known for some 60 years before then that heat could convert directly into electricity merely by flowing through metal!

    Of course, now assuming you, the reader, are an engineer, you will say that electric current flowing in the pipe is not doing anything but turning back into heat, so there is a no-win situation. On the other hand, if you are a physicist, you will say that it is a thermoelectric effect and such effects are notoriously very weak and so can offer nothing of practical importance.

    However, that book of mine that I won in 1945 included a table of data, backed by references, which showed that at the dawn of the 20th century it was known that in steel as much as 16.6 volts could be set up by a temperature gradient of one degree C per cm if the magnetic field strength was 10,000 gauss. That magnetic field is less than half of the saturation field strength in steel. In nickel the direction of the electric field is reversed, but the voltage induced can be as high as 35.5 volts in such a field, though that is nearly double the saturation condition of nickel. Either way, whether we use steel or nickel, we are involved here with the prospect of generating 16.6 volts per cm. of path within a steel or nickel pipe magnetized close to saturation, given a heat flow rate through the section of pipe corresponding to a one degree of temperature drop per cm.

    It never occurred to me when I first came to browse through that book given to me as a school prize that it contained technological information of such importance to the world’s energy future. Instead, I became indoctrinated in the principles governing physics, which say that it is impossible to convert heat into useful work except in compliance with the laws of thermodynamics. A pipe with one degree temperature difference between its outside and inside could not be even one per cent efficient in converting heat into electricity according to those ‘laws’.

    I am now a wiser being and it took me nearly half a century to acquire that wisdom and become a law breaker. One cannot argue with the facts of Nature and it was to be experimental discovery that obliged me to shift my ground, realising that Nature herself has not seen fit to comply with the wishes of whoever decided to formulate that Second Law of Thermodynamics. You see, when heat flows through metal it is carried by electrons in the main and some is transferred by atoms vibrating into one another. Those electrons, which are shed by atoms, get deflected by a magnetic field and can get reabsorbed into other atoms part way along their journey through the metal. Once locked inside an atom the electron can even migrate back in the opposite direction to the heat flow. After all, there is no net current flow in the heat transport direction, so the electrons have to go the other way too. However, in the latter motion they are paired with an atomic nucleus and so the magnetic forces acting on them are unable to move charge laterally with respect to magnetic field and heat flow.

    It is as if the electrons are girls in a barn dance and can go one way freely but, in migrating the other way, they have to hold hands in swinging around a boy, but they can progress from boy to boy transferring their hand hold. In their forward free motion they do not bang into the wall at the end of the barn, because some boy or other captures them and sets them into the sequence of their reverse motion.

    So those heat-carrying electrons seldom travel all the way through the pipe section before shedding their heat and producing that lateral EMF. To comply with the Second Law of Thermodynamics they all need to go from the higher temperature towards the lower temperature at the pipe surface, but they are not completing that journey. They never ‘see’ the lower temperature. Instead, they are deflected to confront a back EMF and it absorbs their energy very efficiently, indeed with a near-to-100% efficiency. There is no such thing here as the physicists `Carnot’ criterion. All there is is a temperature gradient governing the heat flow rate.

    So our mid 21st century homes will have pipework in their basements which use nickel or steel pipes and be designed to have heat flowing through their pipe walls. How then do we provide that heat input? Well, you have been given the answer to that question, we use the heat pump accessory already mentioned. We feed in 15 joules of electricity to generate 100 joules of heat flow through the pipe walls and we convert most of that 100 joules into electricity which we use to supply the 15 joules input and to deliver the rest to meet our domestic needs. There is energy conservation because there is net cooling in that basement appliance and we will need to let heat flow in from the atmosphere somehow to keep the balance, but the result is a cold basement and a warm house or an air-conditioned cool house with a hotter climatic condition outside.

    If you see this as `perpetual motion’ and so ‘impossible’ then stay in the 20th century and skip the future, because you choose to ignore what Nature has on offer.

    If you are shaking your head, as an engineer, and still thinking about how that electric current circulating around the tubular pipe form can get out and into your wire circuits then read on.

    First, let me go back to those school days of mine once again. I was taught physics at a time before our student-time absorbing computer age began and so could learn a little more about old-fashioned subjects, such as ships having magnetic compasses and why ships needed `degaussing’. I want you to imagine that steel pipe mentioned above as being, in effect, a ship structure during fabrication. I was told that when a ship was made it would tend to become magnetized because of the hammering and rivetting on its steel plates which vibrated the magnetism in the steel to cause it to turn into the direction of the Earth’s own magnetic field. The ship became weakly magnetized, enough to affect the reading of a ship’s compass and that magnetism could not be eliminated. One had to compensate for it in some way by putting magnets close to the compass. Furthermore, my school years of learning physics being World War II years, I was told that the magnetism of a ship could attract magnetic mines set floating on the sea by the enemy and so the ship had to be `degaussed’ by using currents flowing around parts of the vessel.

    Yet, when I took up my Ph.D. research (1950-53) on the subject of how there were anomalous excess losses occurring in electrical sheet steels as used in power transformers, I directed much of my attention to examining how mechanical stress affected those losses. I can say, quite categorically, that my research experience assured me that mechanical stress and vibrant stress would reduce, rather than increase, residual magnetism. However, I was not sufficiently enlightened as to question what I had been taught and it was of no importance to me how ships became magnetized during construction. Indeed, nor was I sufficiently enlightened at that time as to see the connection between my research and page 592 of that school book I mentioned.

    I went through three years of Ph.D. research on the subject of anomalous eddy-current loss in electrical steel, losses involving heat generation, without it occurring to me or the professorial supervision I received, that heat itself could be a regenerative electrical factor in enhancing those losses.

    There were two dominant factors, not mentioned as such, but inherent in our instinct by training, namely that in the absence of both a bimetallic structure and a temperature differential there could be no regenerative effects and, further, that any strengthening of a magnetic field meant enhancement of resistivity. As a result the whole emphasis of interest was centred on how waveforms were distorted during the alternating cycles of magnetization, owing essentially to non-uniformities attributable to the domain structure inside a magnetic material.

    Here the lay reader should understand that inside nickel or iron there are everywhere regions fully magnetized to saturation and all we do when we `magnetize’ is to turn some of those around in their direction of polarization. This is why it makes sense to imagine that vibration can produce magnetization.

    However, I am now suggesting here that the ship lying in its fixed position relative to the Earth’s magnetic field during its construction is far more susceptible to the effects of temperature changes than to workmen’s percussion tools banging on the ship’s bodywork. Heat flow through its steel plates, transversely with respect to the Earth’s magnetic field, will set up current flow around the body section of the ship. This will itself set up a magnetizing field along the length of the ship, one way as the ambient conditions warm up and the other way as they cool down. Each day there is a cycle of change and it becomes interesting to ask if this thermal cyclic can build up the gradual magnetic polarization of the vessel.

    At this point I will jump way ahead to refer to our invention, the Strachan-Aspden thermoelectric device. It was demonstrated repeatedly by using ice to cool its working surface and deliver electrical output and then by putting in electrical power to show the equally-amazing rapid regeneration of ice on that same surface. Internally the device operated by electrical current oscillations at quite high frequencies but these were not oscillations of the magnetized state. However, by its nature the device would be subject to some magnetic changes as the temperature cycled. Now presumably it takes hundreds of days of climatic heating and cooling before a ship acquires its full measure of magnetization. Equally, it would seem that a hundred or so sequential cycles of heating and cooling between the temperature of ice and a warm room occur before the evident deterioration of the operation of the Strachan-Aspden device is registered. Accordingly, I make the observation that it would seem that the thermal cycling is a factor which polarizes its magnetic state. Comprising, as it does, a thin film ferromagnetic material (nickel), its full polarization would destroy its operability. The principle on which it works requires the domains in each nickel layer to be fairly equally apportioned as between one orientation and the opposite orientation, because the transverse current flow, which is a.c. seeks passage through one or other form of polarized domain according to its flow direction. It chooses the one offering negative resistance and if such passage is denied then there is no regenerative conversion and simply loss.

    By considering the problem of ship’s magnetism one can then understand our problem with the Strachan-Aspden device, the subject of Energy Science Report No. 2 and Part II of this Report No. 3. It needs little imagination to see that, just as for the ship, we simply need to provide for that `degaussing’ process. How to implement this effectively will depend upon the specific assembly plan for the main thermoelectric devices, which will need some adaptation to incorporate the controlled degaussing feature. In the development stage, if not in the final product, the incorporation of some diagnostic sensing circuitry which can be used to monitor the unwanted polarization will be necessary to provided the feedback control which keeps the device in a healthy state. Such issues are matters for consultation with commercial developers who decide to exploit this thermoelectric technology and will not be addressed in this summary Report.

    However, concentrating on the underlying function of the thermoelectric power generator, let us go back to our mid-21st century house and that pipework in the basement. We still need to explain how it produces electricity that we can extract and use.

    One initial question that will interest some readers is how this topic relates to that reference I made to something that eluded me in my years of Ph.D. research. I was researching the question of why eddy-currents in electrical sheet steels could generate as much as six times the loss expected from accepted theory. It did not occur to me that heat flow from the steel laminations could regenerate EMFs that would cause the current to be far greater than the value determined by resistance and normal Faraday induction.

    My research clearly demonstrated that the loss anomaly was progressively eliminated as the steel became more and more polarized. By this I mean the ratio of the actual eddy-current loss to the theoretical loss. In other words, it was the fact that the magnetic domain regions in my transformer steel provided a optional current path through the metal, one that is obstructive for thermal reasons and one that aids current flow, also for thermal reasons, that created the anomaly.

    It vanished once those domains had been polarized so as to eliminate enough of those that aided current flow.

    So, if our mid-21st century basement air-conditioning unit is to generate a net output of electrical power, we must avoid that regenerative eddy-current syndrome in its pipework.

    We do that by laminating the pipe assembly and avoiding its closed conductive sectional form and the laminations are provided, not because we seek to use alternating magnetic induction, as in a power transformer, but rather because we want to set up the non-linear thermal gradient and avoid a mismatch of thermally-induced EMFs which would otherwise promote unwanted current circulation. The circumferential magnetic field effects are thwarted by introducing a break in the circuit path and tapping off the current flow by diverting into a battery which becomes charged or into a load circuit such as a motor or an electric heater in another room.

    Hopefully, given a modest pipe radius plus a high enough heat throughput rate we should be able to develop a normal cell voltage for this purpose. Given a steel pipe of radius 5 cm. and wall thickness 1 cm. with a 20,000 gauss magnetic flux density along its length, one degree C of temperature drop between its inner and outer surfaces corresponds to a thermal induction of 1,000 V. That, at least, is the theoretical result using empirical data for the relevant thermoelectric coefficient as listed in that book mentioned above. I have verified the source data by tracking back to the original research reference [H. Zahn, Ann. der Phys., 14, p. 886 (1904) and 16, p. 148 (1905)]. Of course, working with a 30 degree C air temperature difference, it is unlikely that even a one degree C drop of temperature through metal can be set up, owing to the limited heat transfer rate across the pipe surface, but one fiftieth of this seems not unreasonable in a thick-walled pipe giving 20 volts output from each pipe section. Six sections connected in series would deliver 120 V d.c.

    Such, at least, is the prospect ahead and I would urge interest in this subject by those having a corporate interest in developing new technology for the world’s energy needs of tomorrow.

    Now, the above pages have been written without referring to any supporting illustrations because I wanted the message in my words to register. The diagrams now follow and then the remainder of this Part I discourse will deal with two separate topics. The second of these is more specifically concerned with the use of bimetallic thin film layered structures of the kind which featured in the main Strachan-Aspden invention, the subject of US Patent No. 5,288,336. The other topic deals with a theme which has been left aside so far, namely the subject of the secondary, in fact the first Strachan-Aspden invention, as disclosed in US Patent No. 5,065,085. The latter invention will be addressed first, but after the pictorial review.

    Review of the Nernst Effect

    Referring to Fig. 1, if heat flow in a metal is depicted by the wavy line, and dT/dz represents temperature gradient, with B representing a magnetic field in the x direction, then there is an electric field of strength E in the mutually orthogonal y direction, given by:

    E = N(B)dT/dz

    Here N is the Nernst coefficient and it may be positive or negative, according to the choice of metal. In a sense this phenomenon is akin to the better-known Hall Effect, where a field E is generated in the y direction by the passage of an electric current in the z direction, given a magnetic field B in the x direction.

    In the Hall Effect we know that the power transferred to that E field arises because the current overcomes a back EMF, the magnetic field B being a mere deflecting agency, just as a railroad track deflects a locomotive but does no work itself. Energy is conserved at all times.

    However, in the case of the Nernst Effect, although there is also energy conservation, we are setting up an electric field which can deliver electrical power output and the heat input is the only energy source available. The Hall Effect, apart from some small resistance loss, is 100% efficient and, since the Nernst Effect is concerned with temperature gradient, rather than absolute temperature, we have a straight analogy with the Hall effect and so can expect that near-to-100% conversion efficiency.

    There are only two problems. One is understanding why electricity is generated with no electric current in the heat flow direction and the other is in devising a physical structure that will let heat flow one way while we take off electric current delivered by that E field in a direction at right angles to the heat flow. Heat conduction and electrical conduction tend to share a common path!

    To resolve the first problem, note that it is well accepted that most of the heat conducted through a metal is carried by electrons. The magnetic field will surely not act on ‘heat’ as such. It asserts forces on the flow of free electrons carrying that heat. Note that I have used the word `free’, because electrons can move through metal in two ways. They can travel freely with little restraint or they can migrate as members of the electron families seated in the outermost shell of the atoms comprising the metal. The electrons in the latter state will, as they move from atom to atom, be subjected to the usual deflecting magnetic forces, but they are held in their quantum states by the succession of atoms in their path and those forces so far as they arise from their transfer from atom to atom are thereby absorbed by the crystals forming the solid body of the metal. Imagine, therefore, that the free electrons transport heat in the z direction and that that heat has the form of kinetic energy which is used, upon deflection in the B field, to stack the electrons up sideways in the x direction against the back EMF of the resulting E field. There will be some back EMF set up in the z direction as well and this must encourage the bound atomic electrons to migrate from atom to atom against the z-direction heat flow. The net result is the slowing down of the electrons carrying heat and the transfer of that heat energy into electric potential that allows the E field to deliver power.

    The full line (curved, with arrow) in Fig. 2 depicts the free electron path, transverse to the E field (linear in direction of arrow) and the B field (direction normal to the page). The electron flow can be arrested as the electrons are reabsorbed by the atoms and then, as they belong to overlapping electron shells of adjacent atoms, they can migrate back to their starting point. There is no current passing through the metal, shown in Fig. 2 as a solid layer located between and electrically, but not thermally, insulated from the faces of two heat sink plates. By applying different temperatures to the two plates there is a resulting temperature gradient in the metal and, given the B field, the E field follows as a consequence of the Nernst Effect. The positive or negative polarity of the Nernst coefficient poses an interesting question, but its answer need be no more mysterious than the orthodox `positive holes’ as referred to in the theory of semiconductors. It is simply a question of mobility of the charge carriers and understanding the quantum-electrodynamic attributes of electrons in motion. It suffices here to let Fig. 3 serve as a guide as to how `bound’ electrons can migrate through a metal without developing an E field. They are moving around the orbits in their atomic shells and they are distributed in energy bands which govern their relative freedom. From the statistical mix of their activity in transporting heat and the building of concentrations of free electrons which set up the E field effects, they somehow contrive to reveal to us the phenomenon which is termed the ‘Nernst Effect’.

    Unfortunately, technologists have not exploited this phenomenon, even though it has enormous practical potential. The reason is two-fold. Firstly, they are given scientific training which says that the second law of thermodynamics reigns supreme and, secondly, as engineers, they seem to have lacked the necessary imagination. I am mindful that it was not in a book on physics or one on engineering that I saw this subject properly addressed. It was a book on ‘Physical Chemistry’ written by Walter J. Moore, Professor of Chemistry at Indiana University. My copy was the third edition published in Great Britain in 1956 by Longmans, Green & Co. Ltd., but the original 1950 edition was published by Prentice-Hall Inc., New York.

    The words which I now quote were on p. 85:

    “The laws of thermodynamics are inductive in character. They are broad generalizations having an experimental basis in certain human frustrations. Our failure to invent a perpetual-motion machine has led us to postulate the First Law of Thermodynamics. Our failure ever to observe a spontaneous flow of heat from a cold to a hotter body or to obtain perpetual motion of the second kind* has led to the statement of the Second Law. The Third Law of Thermodynamics can be based on our failure to attain the absolute zero of temperature.”

    Note that the laws are not ‘the word of God’, but a consequence of man’s experience and frustration at trying to replicate something following God’s example, because the creation of the universe introduced perpetual motion into our environment and, indeed, in our own composition, a system of atoms, each of which involves electrons kept in a state of motion, even when the atom as a whole comes to rest at that supposedly non-achievable zero temperature on the absolute scale.

    Having mentioned ‘God’, I should say that I believe we can only go so far in our understanding of God’s creation as to reach the point where we face questions such as “What is space?”, “What is energy?”, “What set that energy in motion in space?”, “What came before?” and “What follows as destiny?” The human race is a life-form on one planet amongst the numerous astronomical bodies forming that universe and our immediate concern is survival based on a deeper understanding of the energy conversion processes governed by those laws of thermodynamics. However, if those man-made laws are wrong then we need to revise them before we finally accept the inevitable decline in our energy fortunes.

    Although Professor Moore said that the failure to invent a perpetual motion machine had meant human frustration and resulted in the First Law of Thermodynamics, I must ask how Professor Moore would view the ‘Moving Sculpture’ which we are told was seen on Norwegian TV as an exhibit by its creator Reidar Finsrud. It is mentioned in the July 1996 issue of the Utah publication `New Energy News’.

    ____________________________________________________________________________
    * ‘Perpetual motion of the second kind’ is the continuous extraction of useful work from the heat of our environment, whereas ‘perpetual motion of the first kind’ is the production of work from nothing at all.
    ____________________________________________________________________________

    A steel ball weighing 2 lb. runs around a 25″ circular aluminium track and rolls towards the pole faces of a horseshoe magnet suspended by a lever and pivot system just above the track and ahead of the approaching ball. As the ball gets close to the magnet it encounters a ramp linked to that lever system and the weight of the ball in riding over the ramp displaces it downwards to lift the magnet sufficiently so that the ball can roll on and pass underneath it. The magnet imparts a forward drive force as it attracts the ball and it needs less energy to lift the magnet clear of the ball than is gained from the magnet when in its lowered position.

    The report states “A working model has been running for over one month in full public view in Norway”.

    Now, given that this is a genuine account of a real machine, we confront the reality of perpetual motion. On the face of it, the First Law of Thermodynamics has been disproved, but I remind the reader of those words above from that 1956 edition of the book by Professor Moore, “The laws of thermodynamics are inductive in character.” What that means is that the laws are not `proved’ 100%, because they are worded so generally as to extend far beyond the immediate experimental circumstances which have been taken as their basis, but that one seems able to predict from them what may happen if applied to hitherto untested circumstances. Now, from that year 1956, I have been declaring to whoever was willing to listen that the setting up of a magnetic field by the process we call `induction’ sheds energy into space (even vacuous space) as heat, where it is dispersed by merger with the omnipresent vacuum energy activity of the aether. I have urged recognition that the aether becomes polarized by reacting and so is conditioned to shed its own energy when we demagnetize that field. So, if you regard that aether energy as heat, whether at zero temperature absolute or at 2.7 Kelvin, the cosmic background temperature of space, then you can see how `thermodynamics’ gets into the act.

    In the latter case the inductive process is a mysterious exercise of influence by an electrical current in a circuit, as it somehow affects energy transfer across space to where a secondary circuit is located. My inductive powers then tell me that, for energy to go from A to B via empty space C, I cannot say space is empty if energy is to be conserved according to the first law of thermodynamics. Then, if space is not empty, since space is open terrain not assigned to the exclusive use of the energy source at A, I ask if that space could pool energy in transit from a multiplicity of sources. In that case I would hesitate before ridiculing the possibility that energy shed by A may arrive at B supplemented by an excess of thermodynamic energy drawn from C.

    Now, if you do not like to think about ‘space’ in this context and you choose to regard the ‘aether’ as non-existent, then that process of electromagnetic induction means an exchange of energy with something that does not exist and so the process cannot occur, according to the First Law of Thermodynamics as a statement of ‘perpetual motion of the first kind’. Yet, we have built electrical technology on the discovery of electromagnetic induction, building on the recorded experience of Michael Faraday.

    If you choose to regard the ‘aether’ as existing but describe it as a ‘field’ then you are playing with words and have defined ‘aether’ as something other than simply an ‘energy medium’. In that case would you say that a ‘field’ has a ‘temperature’? It is difficult to see where `thermodynamics’ comes into play unless we have heat. Physicists speak of ‘entropy’, which is a word expressing something far more mysterious than what I understand by the word ‘aether’. Entropy is heat degraded by temperature, it being Q/T, a quantity of heat Q divided by temperature T. We shed energy into ‘empty’ space by heat radiation and we say that entropy always increases, but yet we do not say that that ‘emptiness of space’ has a temperature. In other words, most scientists who refer to thermodynamics and entropy, really do not know what they are talking about and, certainly, they could never believe that the Norwegian ‘Moving Sculpture’ mentioned above is anything other than a trick aimed to deceive.

    When one then considers the Second Law of Thermodynamics there is even greater confusion, because one needs two temperatures to account for heat flowing to a greater entropy state as it converts into useful work, but always needing to find a cooler destiny. Does that Norwegian perpetual motion machine run on heat? It cannot, according to the Second Law of Thermodynamics, unless we feed in some heat at a temperature higher than ambient.

    So, what can the Third Law of Thermodynamics tell us? We can never attain absolute zero of temperature! Well, why should we want to do that anyway and, if we did, how far might we get? Professor Moore in that book which dates back more than forty years states:

    “In 1950, workers in Leiden reached a temperature of 0.0014 Kelvin.”

    They did that by a process of demagnetization. In telling this story Professor Moore reaches his conclusion which is that:

    “The Third Law of Thermodynamics will, therefore, be postulated as follows: It is impossible by any procedure, no matter how idealized, to reduce the temperature of any system to the absolute zero in a finite number of operations.”

    Yet, I recall a recent mention of researchers at M.I.T. having achieved a temperature that was a low as a few billionths Kelvin, and so presume that, whatever purpose there was in devising the Third Law, it is hardly important technologically.

    Now, I really have little patience with scientists who tell me I cannot do something or other owing to one or other of the laws of thermodynamics. Each of those laws involves `small print’ and needs scrutiny to see what is meant by the `let-out’ clauses. In enforcing those laws one has always to adapt their interpretation to the `case law’ on which they were founded, the experience of the past. However, we must accept that we are inevitably destined to experience new discoveries as technology advances.

    To come to the point about what is disclosed above by reference to Figs. 2 and 3, does an electron in motion have a temperature and is that temperature different from that of the metal conductor in which the electron transports heat? Scientists can refer to Fermi energies and temperatures of electrons that can run into millions on the centigrade scale but they will not accept the possibility of using electron flow conveying heat in metal as a means for breaching the laws of thermodynamics. This Report faces up to that issue and challenges the Second Law of Thermodynamics by the facts of experiment, just as that `Moving Sculpture’ in Norway challenges the First Law of Thermodynamics unless one sees the aether as an energy source in its own right. There seems no point in challenging the Third Law of Thermodynamics, because, as it is worded above, it merely says that there is a final line to be drawn between what is measurable in terms of temperature and what is not and that we can only reach that line by taking one step at a time.

    An atom, the centre of mass of which is at rest, has zero temperature, even though its component electrons and nucleus remain in motion. A free electron moving through a metal at what is virtually zero absolute temperature, might still be said to have a temperature. As such it has the capacity to do useful work.

    If we can extract some of its energy by slowing it down and it can get recharged by being drawn periodically into the quantum world of an absorbing atom, then there is scope for technological advantage which breaches the laws of thermodynamics. The free electron can be deflected by a magnetic field. It can act in concert with other free electrons to set up mutual inductance electromagnetically, which is a thermodynamic process shedding heat. Our task is to see how we can use magnetism to our advantage and, though Report No. 9 in this Energy Science series concerns tapping energy from the aether, our horizon in this Report No. 3 is more modest in seeking only to tap the ambient heat resource of our environment. We will defy the Second Law of Thermodynamics, but do so by harnessing a phenomenon discovered by Nernst, whose name is closely associated with that Third Law of Thermodynamics. However, we will not be doing anything in breach of that Third Law. Experiments using the Strachan-Aspden devices already tested show cooling to minus 40 degrees C from an environmental temperature of a normal laboratory and that is sufficient to challenge the Second Law of Thermodynamics and gave basis for useful technology.

    Unfortunately, I cannot see a way in which to build a ‘Moving Sculpture’ which can demonstrate this process, but it may help to portray something along the following lines. Refer to Fig. 4. Imagine that steel ball to run down the slight incline of a straight track, where the incline represents a temperature gradient. Then suppose the track is curved through a right-angle at a low level so as to deflect the ball and cause it use its kinetic energy by rolling up a steeper but short incline. The ball represents the electron and the deflecting track represents the action of a magnetic field.

    Now, if the ball is set free at the top of the main track and given a starting velocity it will have enough energy, not only to reach the same height in climbing the branching track, but it will crash into other such balls that got there ahead and try to push them out of the way. Happily we provide a power-driven conveyor system by which a ball reaching that position can be carried back to the start position before being released again to start the ball rolling once more. That power-driven conveyor system is shown in Fig. 4. The conveyor is the zero-point activity of that microscopic quantum world of electron motion in the bound states within the atoms forming the metal conductor.

    The balls will circulate around that system and allow us to extract some of their energy just before they board that conveyor system, just as surely as conduction electrons are set free from host atoms of a metal and are reabsorbed by those atoms, even though they have spent some of their energy. This is an ongoing process whether or not the metal has a uniform temperature, but we need to set up a temperature differential (T to T* in Fig 4) so as to create that track which allows the applied magnetic field to serve as a deflecting influence. The temperature difference is needed to initiate the electron flow which allows us to relate this to what is shown in Fig. 1.

    In referring to the way in which electrons have two ways of spending their time, one where they roam free in gliding past atoms as they wander through a conductor and one where they have found lodging inside at atom, we are discussing something real and active in the world of physics. This is not imagination. It is a physical system in which energy exchanges processes occur on an ongoing basis without suffering any restraints imposed by the laws of thermodynamics. Why, one may wonder, is there not a law of thermodynamics which declares that a body at a uniform temperature cannot sustain activity in which there is any ongoing changes of state as between its component parts? Such a law would not be contrary to our experience of what we see in our environment. All we can `see’ inside the microdomains of a solid metal conductor using electron microscopes and the like is evidence of a crystal structure and a state of order, but yet we have not adopted a law of thermodynamics which says that there is no ongoing activity exchanging energy states in that conductor.

    Without such a law I am free to ask what happens when an atom with a single vacancy in its electron ‘lodging’ capacity moving one way collides with an electron moving the opposite way and absorbs it. Obviously, it becomes a neutral non-ionized particle of matter and we know from Newton’s laws of mechanics how to interpret momentum and the resulting energy deployment. However, Newtonian mechanics also do not tell us what happens to the magnetic inductance energy that the system had before the collision, owing to opposite polarity electric charge travelling in opposite directions, but yet does not have immediately after that collision.

    Suppose, just for the sake of argument, that the atom before collision had the same mass as the electron before collision, then Newtonian theory tells us that both particles could come to rest momentarily before separating again by moving in opposite directions with the same relative velocity. We have then full conservation of energy because the net electric current has reversed direction and so the self-inductance energy of this two-charge system is unchanged.

    Go further and allow for the atom having a normal mass much greater than the electron. Now, even with the electron having high speeds governed by the Fermi-Dirac statistics, the speeds of the atoms at normal room temperature will inevitably result in energy being transferred to those electrons by those collisions. Heat latent in the motion of atoms will transfer to the electrons, but the effect of current and the field reaction associated with inductance can play a role which ensures that the energy added to the electrons is not lost as heat but deployed in motion that sustains the overall level of that current. In short, there can be a superconductive state owing to the regeneration of spent heat as it converts into electricity which can be harnessed.

    Therefore, as we see superconductivity develop and come into use in room temperature applications, so we will have another route available for breaching the accepted laws of thermodynamics, but the immediate task is to describe the technology which exploits the Nernst Effect.

    The First Strachan-Aspden Invention

    It is possible to combine three different metals to form a composite conductor which operates in a magnetic field so as to convert heat into electricity, without being subject to the Carnot efficiency limits. It is even possible to generate alternating current as output, which means that it can be extracted through a transformer coupling at an elevated voltage.

    Consider Fig. 5. There are three metals Cu, Ni and Zn bonded together in the manner shown. The wavy line indicates the passage of heat, its direction being transverse to the direction in which electric current oscillates. A magnetic field is applied in the third orthogonal direction. The magnetic field acts on the nickel to polarize it sufficiently, say to about 80% saturation.

    When current flows to the left it favours passage from Ni to Cu, owing to the fact that there is Peltier cooling at the junction between the nickel and the copper. When current flows in the reverse direction, to the right, it flows from Cu to Zn and from Zn to Ni, cooling at the first junction and heating at the second. Of itself, this Peltier action of cooling and heating is productive of heat overall, meaning that the a.c. current will result in conversion of electricity into heat, which is not of special interest. However, that heat flow through the nickel plays an important role by virtue of the Nernst Effect.

    When current flows to the left much of it flows through the full body of the nickel sector. In travelling transversely with respect to both a magnetic field and a temperature gradient the current is aided by an induced E field. There is additional cooling in the nickel. When current flows in the reverse direction it is opposed by that same E field in the nickel induced by the temperature gradient, which is why it flows through the zinc instead. It finds passage into the nickel only at the extremity of the nickel sector. As a result, the overall effect can be a Nernst cooling which far outweighs the Peltier heating and that means that we can generate a.c. from the heat flowing into this metal structure.

    Note that with astute use of thermal insulation over the Cu section and on the left hand portion of the Zn as well as on the right hand edge surface of the Zn section and underside of the Ni section, the heat flow can be guided along the required path.

    This is sufficient to explain the principle of operation implemented in what is a fairly robust design, but one demanding the external application of strong magnetic fields (as by use of permanent magnets), a mutually orthogonal flow of heat. It needs some confidence to go to the trouble of building such a device but there is little complication in circuit design if the a.c. power source is used as input and the load circuit is itself an electrical resistor which represents a load for test purposes. The initial prototype design could aim to take an input of heat at, say, 30oC and allow its passage through the device to a cold heat sink at !10oC.

    Now, if you have understood what has already been said about the Nernst Effect you will know that the temperature gradient promotes heat inflow to the device and that we use magnets to deflect that energy so that, instead of reaching the output to the cold heat sink, much of that energy is converted to augment the electrical power in the load circuit. Temperature, as such, meaning an absolute measure in Kelvin does not feature in the coefficient governing the Nernst Effect and so one could expect, say, 50% of that heat to convert to electricity that can reach the load.

    So now suppose that you can demonstrate this, guided by what is disclosed to you here in this Report and also in Energy Science Report No. 2, and consider how you might develop the prototype to the next stage.

    This involves using a reversed heat engine, such as a vapour compression machine. A coefficient of performance of 6 can be expected for the above temperatures, meaning that for every joule of electrical input there is 6 joules of cooling. Yet, at the 50% conversion efficiency rate of the Nernst device, this would mean a heat input of 12 joules from the 30oC heat sink. As augmented electrical circuit power we would generate 6 joules of electricity from the heat absorbed in transit. Of that, we can feed the 1 joule back to the reversed heat engine to sustain its operation. There is margin here for a good measure of operational losses, but a significant net power gain delivered as electricity is seemingly quite feasible, with other spin-off benefits if we wish to use the device for heating purposes as well.

    To keep the energy books balanced there has to be a matching external inflow of heat energy at 30oC equal to that delivered as electrical power. The logical source for that heat is the ambient atmosphere and so, if that temperature is lower in value, then the engine must be designed and set to operate between different temperature limits.

    Conclusion

    There is one point at this stage that warrants comment. So much is said about the First Law of Thermodynamics by those not involved in the real technology of the thermodynamic field, everyone quoting it as a statement of something that says it is impossible to do what we have described above. However, Thermodynamics was one of the five final examination subjects I took in my honours level university degree. It was in engineering and involved extensive practical testing of many different heat engines. The ‘Heat Engines’ textbook we used was written by the Director of the Engineering Laboratories of that university. In his words, the First Law of Thermodynamics “may be stated as follows: Heat and mechanical work are mutually convertible.”

    To me, that is implicit in the ‘thermo’ and ‘dynamics’ expressions, just as if we refer to ‘electrodynamics’ we are discussing the conversion of electric to mechanical work and vice versa. In mechanics generally, which embracing heat as the kinetic energy of the molecules in the heated fluid, there has to be compliance with the laws of action and reaction and `perpetual motion’ is ruled out on that count. In electrodynamics generally, as is well known from the accepted theory of the interaction force between two isolated charges in motion, there can be out-of-balance forces. These signify energy exchange with the inductive field environment. In my above account concerning the Nernst Effect I am referring not to the motion of molecules as conveyers of heat, but electrons moving freely as discrete charges, these being the primary carriers of heat through metal. The magnetic field effect harnessed puts the action discussed outside the realm of thermodynamics and brings it into the field of electrodynamics. The latter is a subject I have researched far more than thermodynamics and the motor research, the subject of other Energy Science Reports in this series builds on that research background.

    Whatever the expert and academic background of the reader, the interdisciplinary nature of the subject I am discussing here cannot be ignored by assuming that someone else will be able to fault my claims. If I am deemed to be wrong, there is need to prove me wrong, not just assume I am wrong, because, if I am right, then the technology is indispensable in the onward quest to solve our energy needs. As a starter, one ought to begin by explaining an observed fact evident from the Strachan-Aspden devices as demonstrated. Though we do not have an operational unit any longer, there is a video record of operation. They were bimetallic assemblies of symmetrical construction as between the two heat sinks. When operating in Peltier mode to generate a heat difference with electrical power input along a transverse path parallel to the planes of the heat sinks, it should have been a 50:50 chance at switch on, whether heat sink A cooled or heat sink B cooled, as the other heated. That transverse path was through the planes of Ni-Al metal layers, stacked between those heat sinks. Yet, in every test witnessed, there was always cooling of the exposed heat sink, whereas the other heat sink was a metal base on which the electric circuitry was mounted, which meant that any excess power generated was dissipated as circuit current heating. The logical answer, as I see it, is that there was a cooling process supplementing and indeed overriding the Peltier Effect, and that cooling, I submit, can only be the Nernst Effect cooling which I have discussed at length above.

    PART II

    Introduction

    This is the third of a series of reports which are intended to serve as a technical briefing helpful in the evaluation of invention rights by expert opinion more familiar with conventional technology.

    The subject matter concerns thermoelectricity and ferromagnetism as applied in a novel combination aimed at a fundamentally new technique for converting heat and electricity using a solid state apparatus. It is foreseen that the technology proposed will allow the efficient generation of electricity on a scale that can serve as a main power supply based on non-polluting heat sources that need not necessarily be nuclear in form. The same technology is also seen as a substitute for CFC refrigeration and air conditioning systems.

    The background to what is here described is the demonstrated research prototype devices built by John Scott Strachan and incorporating principles which are the subject of patent rights of which this author and Strachan are co-inventors. Both Strachan and the author are independent in a research sense, meaning that the project work reported is not the work of research in a corporation or institutional laboratory. The physics involved in the invention is somewhat challenging and, in the light of established doctrines and practice, not easily understood without a special briefing. Indeed, without such a briefing as this, comment by experts on the technical viability of the technology, notwithstanding its demonstrable features, could impede the assessment task confronting technical advisors to those corporations we hope to interest in the needed R&D that the technology warrants.

    This Report No. 3 sets out to explore a new aspect of the technology and is supplemental to ENERGY SCIENCE REPORT No. 2 which is really a packaged assembly of prior reports and patent information covering the earlier activity on the Strachan-Aspden invention. The latter Report plus a demonstration of the working device in its refrigeration and power generation modes, as shown also in a video record, have been the basis in our efforts to engender interest by those who can take the R&D forward.

    This Report aims to point researchers in the direction which this author perceives as being the most promising from a product development point of view. However, in the absence of documented legal agreements, the Report should in no way be deemed to confer any implied free right of use in connection with what is disclosed, as proprietary rights are reserved, as by prior patent filing.

    Preliminary Observations

    The prototype demonstration devices all had a working core assembled from strips cut from an electrically polarised polymer sheet that formed the substrate for a very thin bimetallic surface coating of nickel and aluminium.

    The relevant features from a functional viewpoint are:

    1. The fact that the nickel is ferromagnetic.
    2. The fact that the two metals have contact interface that can intercept heat flow confined to the plane of the material.
    3. The fact that, in being very thin, the metal, when transporting heat, could operate with fairly high temperature gradients in the metal.
    4. The fact that the substrate was a heat insulator and a space filler separating the conductive metal films and so ensuring that losses by heat flow between hot and cold heat sinks were minimal.
    5. The fact that, in lending itself to assembly in a structure that became a series-parallel plate capacitor, we could excite electrical oscillations transverse to the bimetallic junction interface plane, which meant that Peltier EMFs were directed in the line of current flow and such flow was through metal over an area of large cross-section which meant virtual elimination of I2R loss.
    6. The fact that the two metals were opposite in electrical character, meaning that their charge and heat carriers were of different electrical polarities, a feature which, by virtue of the Thomson Effect, means that in-plane current flowing and circulating between the hot and cold sides of the device derived its power directly from heat throughput and did not drain the electrical power fed by the Peltier EMF.
    7. The fact that the device functioned as it was designed to function with a.c. as the transverse input-output form of power, because the Thomson current could divert the flow between hot and cold junction sides of the device according to the reversals of current polarity.

    What we did not know is the design scope for increasing the thickness of the laminar metal in the device and whether the use of an a.c. operating frequency measured in tens of kilohertz was essential. It seemed from our earlier research that we needed that frequency activity to prevent a kind of lock-in effect forming super-cold spots in the Peltier-cooled portions of the junction interface or one needed a prevalent magnetic field which, by Lorentz force effects, could promote a shifting or displacement of the current flow traversing a junction. Strachan had suspected that there were oscillations developing in his d.c. experiments on thermocouples subjected to strong permanent magnet fields.

    Furthermore, and having regard to certain other early experiments that were performed on thick metal assemblies subjected to high temperature operation, but operating at power frequency, we may well have neglected the role of the Nernst Effect in interpreting the performance data in our high frequency capacitor coupled devices. It is noted that it was the Nernst Effect which this author saw as the crucial factor in the operation of the three metal device that became the subject of our U.S. Patent No. 5,065,085 (corresponding U.K. Patent No. 2,225,161). However, Strachan in his funded experimental work concentrated attention on the form of device that was so impressive in generating power from ice and in freezing water with electric battery power input using that capacitor assembly and the subject of those other patents remained undeveloped.

    It may be noted also that the whole foundation of the cooperation between Strachan and myself was our correspondence and exchanges some time even before we first met en route to a 1988 Canadian symposium on ‘clean’ energy. Those exchanges concerned the prospect of fabricating thermocouples by laminar thin film assemblies, virtually almost as a book-binding operation, and particularly our findings that magnets have a significant effect on the way thermocouple junctions perform.

    A practical problem which confronted Strachan was that a great deal of effort was needed to cut and assemble hundreds of tiny pieces of polymer film in a way which avoided short-circuits between adjacent metal films and yet allowed connections to be made linking into what was a combined series-parallel capacitor structure. In being of small size, meaning limited power rating, there was then the task of designing and connecting an electronic circuit that could develop the necessary oscillations and take off power without interposing obstructive circuit contact and threshold potentials, whilst, to get adequate current flow through the series-connected sections of the capacitor, resonant operation at high electrical stress in the polymer dielectric was necessary.

    Indeed, the manual assembly problem and circuit design exposed, in a sense, Strachan’s Achilles’ heel and it was this that precluded extensive onward diagnostic testing based on building several test structures using different design parameters, eg. choice of metal combination, metal film thickness and substrate material. There were other factors too, mainly arising from the route Strachan had followed in his earlier corporate research employment, which had involved the bonding of stacks of the polymer film in a structure which was acoustically tuned to set up mechanical vibrations for a medical application. That project had encountered operational difficulties owing to delamination and the assembly technique, though tedious, had evolved to overcome that problem whilst still needing excessive care to avoid electrical end-shorting as the bonded film components were cut to size.

    In testing the structures assembled for that project Strachan had found that spurious electrical effects were overloading his test circuits and that there were instabilities that could be stimulated, seemingly by static charge, heat or mere physical displacement associated with manual handling.

    Another problem, concerning the later thermoelectric research, was that there was some uncertainty, at least in Strachan’s mind, as to whether the piezoelectric properties of the polymer were contributing in some way, even though this seemed to be ruled out of significance by the a.c. operation. My assumption on this was that the heating and cooling that accompanies cycles of voltage potential would compensate one another. There simply had to be an operational asymmetry from a functional point of view if net heating and net cooling were to link with the highly efficient electrical energy exchange that was in evidence.

    It was only late in 1992, stimulated by a new funding sponsor interested in the power generation applications, that Strachan about measuring the magnetic field that was of necessity produced in the thin film by the Thomson Effect current circulation. His findings were of such an unexpected nature that he then stated we need have no more circuit switching problems. He became convinced that we were dealing with a phenomenon in the metal and not one in the polymer dielectric.

    Thermodynamics Limited, which was the vehicle through which the patent rights were to be exploited, was granted an option to acquire rights under the U.K. patents which allowed the contractual arrangement with the sponsor and the R&D appraisal funding was passed on by sub-contract to Strachan’s laser-orientated venture Optical Metrology Limited.

    Strachan’s research findings were then covered by Thermodynamics Limited filing a U.K. patent application on 6th February 1993, but since then no further progress has been made and no information forthcoming from that project. In the event, therefore, though this patent application was officially published as GB 2,275,128 it was not taken further and so became abandoned.

    However, it is of relevance to this Report to summarize the scientific nature of the above discovery and, no doubt, more information will eventually be forthcoming from research endeavour of others now interested in that subject. In this connection, it must be stressed that, although we believe the phenomenon of interest is occurring in the metal, it is clear that there are some advantages in using the polymer PVDF because, in the form used, it has a strong electric polarization which allows one to control the cyclic changes of very strong electric fields at the interface with a metal film. This had a quite remarkable effect on the magnetic polarization developed in nickel but only when a temperature gradient was present in the plane of the film.

    The main thrust of this Report, which does not concern polymer features, is the onward research now needed to fabricate a product version, based on this author’s own independent research findings, and this requires some discussion of the Nernst Effect.

    Solid-State Magneto-Hydrodynamic Power

    It is assumed at this stage that the reader will have access to the detailed description of the Strachan-Aspden technology as described in ENERGY SCIENCE REPORT NO. 2, it being the sole objective now to develop the research theme (4) of the above list.

    In order to focus the attentions of industrial interests it seems best to outline at this stage the anticipated constructional form of a possible product.

    The core design would seem to be one having two planar metal heat transfer surfaces bounding an internal assembly. A temperature differential between these two surfaces is associated with heat flow through laterally-disposed metal ‘rib-like’ connections within the structure. Some means for electrical activation of a cross-current flow transverse to the heat flow direction is then needed inside the panel unit thus formed.

    The latter could, as simple logical alternatives, be couplings or connections that are either capacitative, directly conductive (through heat insulating polymer) or magnetically inductive, whichever is the most effective and reliable as well as viable on price considerations.

    This description, however, presents a quite ingenious way of using the conductive coupling without the design limitations of capacitance coupling. It has the merit of also being the most expeditious research route for this author in present circumstances, pending developments on the conductive polymer front.

    Skin Effect Segregation

    In order to get current to traverse a metal in the y direction with heat flow in the z direction and a magnetic field in the x direction, one can provide a metal interface between metals A and B across the xz plane and feed a.c. current through the structure in the y direction in a way which restricts the current flow to a metal surface of the A metal, whereas heat flow in the same direction is spread over the cross-section of the conductor and not subject to such a restriction. Then that heat flow can enter metal B from metal A at a position in metal B behind the current traversal position. This means that the heat flow can then turn to flow in the z direction transverse to that current in the y direction.

    The way to do this at normal power frequencies is to provide a rather thick high-permeability ferromagnetic conductor as metal A with metal B being a very thin ferromagnetic conductor. The current-heat segregation arises from eddy-current skin effect in metal A and, in traversing a thin section of metal B, the current will choose a flow path that is through a single ferromagnetic domain in metal B in which the magnetic polarization is transverse to the heat flow path and the current flow path and also in the direction that causes the Nernst Effect to develop a forward EMF.

    An experimental test rig by which to verify this action is depicted in Fig. 6(a) and Figs. 6(b) to (h) apply to its operating principle.

    Referring first to Fig. 6(f) the intention, for operation in the power generating mode drawing on heat supplied through a duct intermediate two panel structures in Fig. 6(a), is to cause heat to enter a thick metal heat sink layer of a ferromagnetic material coextensive with the surface of the duct. Then the heat flow is guided through 50 micron (0.002 inch) steel film which greatly reduces the thermally conductive cross-section and so results in a steep temperature gradient in that film. If much of this heat can be removed at entry into the film owing to a thermoelectric action there will then only be a small residual heat flow of much reduced temperature gradient left to provide direction for that flow towards the other heat transfer interface.

    To use the Nernst Effect to discharge this cooling function one needs, firstly, a strong magnetic field, then, secondly, a flow of heat and, thirdly, a route for current flow, all mutually orthogonal. This means that one must contrive a way of ensuring that the heat flow does not automatically assume the same path as the current.

    How can we separate heat flow and current flow in a common metal structure without having the current flow through a capacitative gap in the metal? Consider Fig. 6(d). Here a thick steel rod is shown to have an external current circuit which is arranged to magnetize the rod along its axis. One can set up a temperature differential along that axis and have heat flowing through the rod distributed uniformly over its cross-section. If now the current is a.c. and the rod has a high magnetic permeability and is thick in relation to that frequency, eddy-current skin effects will confine the current to a section close to the perimeter surface of the rod. Most of the heat will then enter the rod along a path that is different from that of the current. Should that rod be cooled in some way from the outside, as with the Fig. 6(f) representation, that small amount of heat loss will require heat to be diverted radially, and so orthogonally, with respect to any d.c. magnetic field along the rod axis and with respect to the induced skin current.

    In this scenario one would have heat converting into electricity by Nernst Effect cooling, but all that would occur would be enhancement of the eddy-loss as the current escalates and the skin-effect becomes even more restrictive. Heat would be regenerated and there would simply be anomalous eddy-current effects, a phenomenon actually found in normal transformer sheet steels, but one that has not been understood by the academic establishment.

    The way forward is to consider now Fig. 6(e). Here, the circuit loop is an elongated part-tubular thick ferromagnetic core and it is presumed that the a.c. current is supplied in the manner shown. In this case, the eddy-current skin effect is not on the outside of the core. The current is concentrated at the inside surface. The reason is that the current follows the path of least resistance and it would rather flow around a thin metal path embracing what is mainly an air core than around the highly inductive ferromagnetic core as well.

    This configuration allows us to intercept the EMFs that are developed by the Nernst Effect, because they constitute `forward’ EMFs assisting flow, meaning that, if we can conceive heat action bringing about electrical current oscillations in a converter which incorporates the principles being described.

    Now, rather than providing a special magnetizing winding for producing a powerful magnetic field in the metal and then contriving the heat transfer interfaces inside that winding, it seems best to make use of the intrinsic domain magnetism in a core comprising thin steel film.

    Refer now to Fig. 6(b). Here, several such films or laminations are shown to be sandwiched between two steel bars. The bars form an energy route for heat which is to flow through the steel laminations and be bled off through those laminations to seek a cooler heat sink at their extremities. A conductive metal, which need not be ferromagnetic, in between the base section of the laminations merely serves to provide the conductive path makes their connection.

    Looking at a single lamination, one can see from Fig. 6(c) how the saturation magnetic flux in the domains can be directed. The dots and crosses denote arrow directions as pointers indicating the magnetic flux orientation. Any current passing transversely though the lamination will be aided by the intervening non-magnetic highly conductive metal to guide it through the domain which offers least resistance. Going one way through a domain will develop a back Nernst EMF and going the other way will develop a forward Nernst EMF. According to its polarity the current will always choose a route through a domain which offers a forward Nernst EMF. There will always be cooling if there is an orthogonal temperature gradient.

    This is shown in Figs. 6(g) and 6(h), with the skin effect serving to separate the heat flow path (broken lines) and the current flow path (full lines). The domain pattern in the thin film, whether in nickel or iron, which have different polarity Nernst Effects, will always govern the current flow and contribute to cooling.

    Note that, by using thick steel bars to provide the heat sink spacer members, the heat can flow freely to the entry into the domains in the film. The skin effect, even if such that current is confined to what is effectively a 10% section of the thick steel bar, will only suffer the resistance losses that relate to flow in that restricted section, but set alongside the power generated by the Nernst Effect cooling action, this is a quite small loss and it merely regenerates heat input. The only loss as such is that of heat conducted to the remote secondary heat exchange interface.

    Thus, in Fig. 6(a), the structure shows how current circuit connections can be made to link with a transformer core and other windings coupled to external circuits. The bold lines indicate a thin layer of electrical insulation, which still allows passage of heat, it being necessary to avoid the short-circuiting of the power generated. The rib-like connecting films can, in functional terms, be left floating in a cooling medium with no assembly for the secondary heat exchange interface.

    An inert gas blown across the fins thus formed would serve as the means for assuring the temperature gradient which activates the main cooling function drawing on heat supplied to the inner duct.

    The Magnetic Inductively-Coupled Device

    Whereas the above description concerns a electrically-conductive coupling between separate compartmented sections of a thermoelectric power converter and uses a transformer coupling externally to bring together the power generated from heat in each metal film current crossing, one can see scope for building a version which relies directly on a magnetic inductive coupling in each cell of the structure. In effect, this involves building a transformer within each compartment. This is shown in Fig. 11 of a recently filed patent application. (See Appendix I on page 37).

    The device would comprise internally a series of longitudinal compartments each containing a slender rod-like magnetic core. A winding on the core would provide the circuit by which input or output a.c. is fed through all such windings connected in series and possibly through an isolating transformer.

    The heat transfer problems of the outer bounding metal surfaces and their design will not be discussed as these are familiar terrain for those involved in the relevant industries and this is not intended to be part of a business proposal for setting up a manufacturing venture.

    Noting that the bimetallic metal circuit linked by each ferromagnetic core is virtually a short-circuit, the reader will understand that very little EMF has to be induced to set up the internal current circulation. That said, the cross-section and operating magnetic flux density can be quite small, even though the design aims to take off substantial current by transformer action. A very close coupling as between the heat-driven primary circuit and the secondary as output winding is then essential and that is assured by the enclosed compartment feature which provides a conductive housing along the whole length of the ferromagnetic cores.

    The latter do not need flux closure structure in small product applications, because being very long in relation to their sectional dimensions, they have very little demagnetizing effect and such inductance as does exist is to the good, because it will serve to smooth out the loading as between the several cores.

    Experimental Investigations

    The objective of the experimental investigations can be set in context if one refers to a vector diagram showing the normal operation of a power transformer on load and on no load.

    The focus of attention is the eddy-current induced in a single lamination as normally used in a transformer.

    The current flow parallel with the surface of the lamination is subject to resistance and that flow accounts for virtually all of the eddy-current loss, whereas the flow of current at the edges and transverse to the main flow makes the short crossing of the thickness of the lamination and will take the path of least resistance.

    A lamination that has a 200 micron thickness is one in which single magnetic domains could well span the full thickness, meaning that a current which has, near those edges of the lamination, to flow from one side to the other can do that by passage through that single lamination. This contrasts with the main flow parallel with the surface, in that the latter has no choice but to travel through the succession of domains in its path which have polarizations first one way and then the other way. Given that the temperature gradient in the lamination will normally be confined to flow in the plane of the lamination and across its width as opposed to its length, that makes it orthogonal with the current crossing the thickness and with the magnetic polarization of some of the domains. That current can choose a domain for which the Nernst Effect asserts a cooling action to give an EMF impetus to the current flow.

    In summary, therefore, we see that the Nernst Effect in a thin transformer lamination will drive eddy-currents as if it introduces a negative resistance in the eddy-current loop circuit, given the natural heating that occurs anyway with the presence of magnetization loss.

    The question at issue is how this Nernst Effect can be represented on a vector diagram showing transformer core operation and the answer to this is that it amounts to a forward EMF driving a current in anti-phase with the back EMF and so amounting to a magnetomotive force (MMF). It may now be realised that there are interesting considerations when one makes a comparison between the on-load operation of a transformer and the no-load operation of a transformer.

    Remember, however, that, without the temperature gradient, there is no Nernst Effect. Provided, therefore, we operate the transformer on load and try to avoid the no-load situation, which aggravates loss owing to the Nernst Effect, there is special advantage from a power generation point of view in accentuating the Nernst Effect by incorporating the bimetallic lamination feature. This transfers heat in a way which can sustain the thermal gradient and allow the transformer to provide more output electrical power than needed as input, by virtue of the Nernst cooling action which draws on the external heat source.

    In its own curious way, this on-load-no-load distinction between the magnetization loss action in a transformer can account for the lack of concern about the eddy-current anomaly by transformer design engineers. The loss anomaly is there to be seen if one tests the no-load properties but transformers are so efficient when operating on load that one need not worry about it at all. However, if the subject under discussion in this Report becomes a technological reality, the transformer that accentuates the Nernst Effect by using thinner bimetallic laminations will bring that eddy-current anomaly more to our attention.

    Discussion

    The tests just reported tell this author that the Nernst Effect, if exploited in thin ferromagnetic laminations, having surface provision for guiding conduction current through single domains selected naturally by `path of least resistance’ action, and having a temperature gradient that is in-plane in the lamination, will serve as a very efficient cooling device.

    The findings reported in the test conforming with assumption I are very exciting in that, given confirmation by further experiment, there is reason to believe that a self-generating action is possible. It may even be that we can feed heat into a transformer implementation and with a controlling primary input get the main power from the secondary.

    This combines refrigeration and power generation, not just as variants on the same technology, but as one and the same, though to get main power generation on a kW per unit weight of apparatus basis one will need to force-feed heat energy input.

    The technology needed on the basis of the reported experiments is one of intercepting the `eddy-current’ flow and that is essentially what we see in a power transformer, because the secondary windings are circuits in which eddy-current flow if the output connections are shorted. We simply need to incorporate a secondary winding that can react inductively to the `eddy-current’ circuit or include the Nernst activated elements into that secondary circuit directly.

    These issues, therefore, become design questions that involve R&D of a proprietary nature and it is submitted that this ENERGY SCIENCE REPORT has served its purpose of introducing prospective development interests to the potential of the technology outlined above.

    The technology is destined to provide a very effective route to converting heat into electricity and, since the principles of operation depend upon temperature gradients and not absolute temperature in Kelvin, there is no Carnot factor to limit performance. There is therefore, with some internal heat recycling, clear scope for a near to 100% conversion of heat into electrical power and at least refrigeration prospects down to 77K, where the warm superconductor regime offers future promise.

  • ENERGY SCIENCE REPORT NO. 2

    ENERGY SCIENCE REPORT NO. 2

    PART I: POWER FROM ICE: THERMOELECTRICS

    Introduction

    This Energy Science Report summarizes the development status of the Strachan-Aspden thermoelectric energy conversion technology as of May 1994. Onward research from that date is the subject of Energy Science Report No. 3.

    The basic invention is the brainchild of its coinventors Dr. Harold Aspden of Southampton, England and John Scott Strachan of Edinburgh, Scotland and it dates from their first meeting in Canada on the occasion of a New Energy Technology Symposium held in 1988 under the auspices of the Planetary Association for Clean Energy.

    In its conception, the invention merges the technical disciplines of magnetism (Aspden) and piezoelectricity (Strachan) in a structure which exploits, first and foremost, the thermoelectric properties of metal. In its onward development and promotion, the respective professional skills of the two inventors were brought to bear in laboratory assembly (Strachan) and patenting (Aspden). Geographic separation by 420 miles has precluded a close working relationship in pursuing this project in a normal technological development sense, it being a private venture by two individuals, each having other unrelated technical interests.

    In the event, what is an extremely important inventive contribution, that potentially can provide the non-pulluting refrigeration technology of the future, has remained undeveloped, notwithstanding some small external R&D funding that has been of assistance to Strachan.

    There are not, of record and suitable for issuance, any detailed experimental tests or results provided by Strachan. Almost all the documentary material that has been made available until now has been generated by this author (Aspden), mainly in a patent attorney or promotional capacity. Much of this latter information is the basis of this Report. One appended item that is new at this time is the account which Strachan prepared in February 1994 describing the polymer PVDF structure and fabrication of the first and third demonstration prototypes which he built. The issuance of this Report at this time follows the recent grant of the relevant U.S. Patent No. 5,288,336 dated February 22, 1994.

    The object of this Report, therefore, is to arouse interest in the Strachan-Aspden invention in those corporations having the necessary R&D resources or ability to fund such R&D in academic establishment laboratories with a view to the disposition of the patents involved.

    Concerning the Patent Rights

    [Note added June 2003 when this Report is made available on the author’s websites www.energyscience.co.uk and www.aspden.org . The comments about patent rights which follow no longer apply as the patents involved were not kept alive, owing to lack of interest by prospective developers having the necessary disposition to fund the onward research needed. However, for the record, the text below remains unamended from its initial form as published in 1994.]

    The schedule of patents relating to the Strachan-Aspden technology forms APPENDIX I. The author, in his Attorney capacity representing the proprietor interest in these patents, is empowered to negotiate options or outright assignment. Based on the introductory technical briefing offered by this Energy Science Report, and the information now being incorporated in further reports, the author also makes himself available for some limited consultation on onward development by those parties who enter into the necessary Agreements.

    The abstract and title page of the principal U.S. Patent, 5,288,336 is included in APPENDIX I and those interested in the detailed disclosure and claim cover will no doubt wish to acquire and inspect a copy of that published patent specification.

    Essentially, the details of the operation and technology underlying the invention can be understood from the descriptive material provided later in this report, but there has been a shift in emphasis as to this author’s technical appreciation underlying physical functioning of the invention and this features in certain additional patent applications which have been filed and which, though listed in APPENDIX I, will form the subject of Energy Science Report No. 3.

    In order, however, to assist the reader who does inspect the primary U.S. Patent No. 5,288,336 and also U.S. Patent No. 5,065,085 and seeks a simple insight into how this author now views the underlying physics, the diagram in Fig. 1 below may serve.

    Fig. 1 Nernst EMFs induced in nickel by heat flow

    When a temperature differential exists in a thin film of nickel sandwiched between dielectric insulation in a parallel plate capacitor, the heat flow carried by electrons is deflected by the strong polarization fields in the oppositely polarized single magnetic domains that bridge the film thickness. This, by the thermoelectric phenomenon known as the Nernst Effect, develops an electric field polarization as shown by the arrows. It is orthogonal with respect to the direction of heat flow and the magnetic polarization. It may then be understood how a lateral oscillation of current flow through the capacitor can choose a flow path on successive half cycles so as always to transfer charge across the metal plate electrodes to draw power from an assisting EMF by avoiding the path obstructed by an opposing EMF. Cooling must then result as that power transfers into the external circuit. The dielectric insulation obliges the heat flow in the nickel to remain orthogonal with the current flow direction and also with the magnetic polarization which is necessarily in-plane in the nickel.

    Development Status: May 1994

    [The figure references in this section apply to the patent specification drawings included at pages 7 and 8 of this Report]

    There were three techniques in the original conception of the invention. The common feature was the idea of using a capacitative coupling to block heat transfer between the hot and cold heat sinks whilst contriving thermoelectric energy conversion. Strachan advised that all three had been tested experimentally and were viable.

    The one ready for demonstration (the capacitor stack) was given preference for onward development. The strategy adopted was to file a first patent application showing capacitor use in the heat blocking sense (Figs. 1 to 4 of the 18 November 1988 patent filing – same as those in U.S. Patent No. 5,065,085) and a brief disclosure of the stack (Fig. 4) but not disclose the detailed assembly of the stack. A second U.K. application filed 5 December 1988 added Figs. 5 to 8 and covered that detail and described the prototype version of the stack as I understood it at the time.

    In the event the capacitor heat blocking proved not to be of particular merit but we had a basic invention in the disclosure in that the confinement of heat flow to the bimetallic capacitor plates with transverse current oscillations gave remarkable results.

    The subjects of Figs. 1 to 3 were not developed further, even though they have merit. I persisted in securing patent cover in U.K. and U.S.A., the latter, as just indicated, being granted as US Patent No. 5,065,085.

    The patent cover which followed from the capacitor stack was adjusted and tailored to the diagnostic findings that emerged from the research on the second prototype and the international patent filing including US filing did not replicate the features of Fig. 7 or Fig. 8 or include the acoustic oscillation feature that was incorporated in the first prototype.

    The First Prototype: September 1988

    This was a capacitative polymer dielectric stack with bimetallic Al:Ni coatings and provision for acoustic oscillation of interleaved premagnetized magnetic recording strips (see Fig. 8).

    Strachan has, only in February 1994, and in preparation for a visit from overseas by interested corporate project engineers, documented the detailed constructional techniques of that first (and later third) prototype. This forms APPENDIX VI.

    Fabrication is very complicated and it is not suggested that the resulting devices did anything more than prove that we have discovered an energy conversion principle that has very outstanding merit. The task ahead is to develop on the test findings of the much simplified second prototype.

    The Basic Principles and the Second Prototype: October 1989

    This was the technology on which the multi-national patent filing was based, claiming the priorities of the 18th November and 5th December 1988.

    The principle is evident from the following diagram:

    (a) Thermoelectric current flow: no transverse excitation

    (b) Junction cooling on left with transverse up-current

    (c) Junction heating on right with transverse down-current

    Note: (1) We are using a.c. with negligible I2R loss.
    (2) The circulating current is that carried by heat flow (Thomson Effect – metals of opposite electrical polarity).
    (3) The dynamic a.c. current interruption increases the thermoelectric power enormously (avoids junction cold spot formation).

    At that time (October 1989), though the Nernst Effect was in mind and had been mentioned in connection with the disclosure in U.S. Patent No. 5,065,085 and though nickel, a ferromagnetic substance, was one of the two metals in the test device, it was not then realised that the Nernst Effect might also play the key role in the functioning of that second prototype device.

    The October 1989 status is evident from the Test Report (APPENDIX IV). APPENDIX V provides a scientific analysis of the cold spot problem).

    The questions outstanding from those tests were:

    1. What frequency could we reduce to and still get the high thermoelectric EMF? The cold spot theory implied that we could operate even below 1 kHz, but the capacitor coupling limited the transverse current and that suggested building a direct metal conductor coupling following contours of constant temperature – so as not to divert heat from the junctions.
    2. What thickness of metal film could we increase to whilst not losing efficiency? Note that the Thomson Effect circulation fixed the current that could flow transversely owing to the half-cycle cut-off.
    3. Which metal combination was optimum?
    4. What fabrication technique was best to ease manufacture and assure reliability?
    5. Why was it that we seemed to be getting more transverse current flow than the design capacitance of the stack implied from the voltages we measured?

    In the event, early in 1990, Strachan was obliged to abandon all work on the project and the development fell dormant. This was owing to business failure of the sponsors on an independent manufacturing venture but Strachan was then unable to demonstrate a working prototype and we were, in effect, then in a worse position than at our late-1988 start point.

    The 1991/1992 Scenario

    Not having the resources to set up an experimental programme myself, and especially as I had to sustain the costs of the patents I decided to publish in the hope of attracting interest from corporations. I had nothing to demonstrate.

    My effort to publish in the Journal of Applied Physics caused a referee to say ‘publish but provided more detail is given as to actual construction of the device’, but the Editor felt my amended paper did not go far enough in that respect and so that initiative failed.

    By year end 1991 I had an acceptance from a U.K. electronics magazine (the July 1992 article in Electronics and Wireless World) and had a paper scheduled for the 1992 International Energy Conversion Engineering Conference in San Diego.

    The publicity in U.K. attracted corporate interest, and Strachan then took the initiative and assembled the third prototype. The showing of that impressed a major U.K. company interested in new energy development and they provided new funding for Strachan for a period of 8 months.

    I made sure that the demonstration was recorded on video and this has proved helpful in talks with interested parties. My personal objective concerning onward development has been to see the test device operate without reliance on the capacitor fabrication, either by an alternative conductive coupling between the bimetallic laminations or by magnetic inductive energy transfer, i.e. by intercepting the thermoelectric current by an inductive back EMF.

    By year-end 1992, after 4 months of the new funding, Strachan reported on a test whereby, given a temperature differential in the bimetallic lamination, the magnetic flux could be controlled at 20 kHz by an electric grid control. However, that research did not progress to his satisfaction and I have insufficient data for me to make sense of the outcome of the experiment. The funding for Strachan’s research ceased at the end of April 1993.

    The Aspden Experiment

    In September 1993 I decided to initiate my own small experiments, based on the approach I had been advocating, namely to build a magnetically inductive core system and feed in eddy-current heating to set up a temperature gradient in bimetallic laminations. The idea of this, regardless of application to refrigeration or power generation, was simply to have electrical control throughout the test and determine the relevance of the ferromagnetic property, metal thickness and excitation frequency.

    The outcome of the first experiment has been described in Energy Science Report No. 1 and further onward experiments will be described in Energy Science Report No. 3.

    However, some interesting problems have been encountered in the latter pursuit and the quest to operate in an all-metal high current mode with no dielectric laminations and capacitor drive, which is aimed mainly at solid-state electric power generation from input of heat at higher temperature than is normal where polymer dielectrics are used, may prove too demanding for this author’s private research facilities.

    Accordingly, as a guide to readers interested in pursuing the alternative capacitor construction based on that simple Nernst Effect principle as mentioned on page 2, the following analysis is included.

    Capacitor Stack: Design Considerations

    Consider nickel film to have a thickness δ cm and the form of a 1 cm by 1 cm square. Assume a temperature difference of 1o C from one edge to the opposite edge and denote the specific thermal conductivity of nickel as K watt-cm2/oC which implies a throughput heat flow of Kδ.

    This heat flow is heat input loss if we do not intercept the heat and deploy it into electrical output.

    The Nernst Effect has a coefficient for nickel which depends upon whether we use nickel I or nickel II, the latter being larger by a factor of nearly three. On the basis of the data of record from experiments by Zahn reported in Ann. der Phys. 14, 886 (1904) and 16, 149 (1905) on nickel we can reasonably assume that a 10 V per cm Nernst EMF is set up at right angles to the heat flow for each degree C temperature drop per cm.

    It follows that the heat flow can be intercepted and deployed into output electrical power if we can provide for a transverse flow of current I amps without there being heat flow in that same transverse direction, with I given by the equality of 10Iδ and Kδ.

    We see that the thickness δ makes no contribution to the heat to electricity conversion efficiency. This thickness of the nickel merely has to be small enough to assure the single domain condition, say no greater than 50 to 200 microns depending upon the crystal size in the nickel.

    The task is to secure the equality of K and 10I, meaning that with the 1 degree C per cm gradient and K of the order of unity, I has to be 100 mA per sq. cm of capacitor area to get optimum operation. A higher temperature gradient requires a proportionally larger current flow.

    The design consideration then centres on the a.c. operating frequency and capacitance of the structure. If the dielectric thickness is of the order of 10 microns and the dielectric constant is 10, both of which are demanding design parameters, then a capacitance of the order of 1 nanofarad applies to the 1 cm. square nickel plate electrodes used and operation at about 16 kHz will give a current of 0.1 mA per volt. It would need 1,000 V across that 10 micron dielectric to give the 100 mA current requirement.

    This is voltage stress requirement is too high and, also, there is another problem governing the combination of design parameters. This is that, if the thickness of the nickel is so much greater than the thickness of the dielectric, then the current ‘sees’ more a flow through the main surface of a ferromagnetic film and less the transfer of distributed charge on the surfaces of a dielectric. This can make the current bunch up by a pinch action in that ‘negative’ resistance flow path through the metal and, to avoid this, the dielectric has to be thicker than the nickel.

    The design therefore proceeds by first deciding the operating limits on the voltage of a resonant capacitor stack using the inductance of the flow through the nickel in combination with the capacitance of the stack to determine that frequency. This sets the thickness of the dielectric. The nickel, if deposited on a substrate dielectric can be quite thin, say 5 microns, and it is this requirement for thin dielectric larger in thickness than the nickel, rather than the domain size factor, that obliges use of even thinner nickel films.

    Note that the target of a 100% energy conversion efficiency then will depend primarily upon the scope for increasing the electric breakdown strength of the dielectric used, assuming simple metal parallel plate capacitance. Alternatively, in order then to bring to bear a suitable combination of parameters that allow moderate voltage gradients in a dielectric whilst allowing the current throughput to be adequate at a reasonable frequency, the way forward is to incorporate in the design the technology of electrolytic capacitors.

    This, as this author sees the situation, is what Strachan did in building his prototype device using a PVDF polymer dielectric and it follows from this preamble discussion that those best able to develop the subject technology are those corporations who are already manufacturers of electrolytic capacitors.

    Given then that the Strachan device did perform as a cooling device and as a heat pump able to convert heat into electricity one sees the prospect of developing a thermal electrolytic capacitor that will convert heat to electricity or serve as a Nernst Effect heat pump with no Carnot limitation on performance.

    The reason for this non-Carnot limitation is discussed in Energy Science Report No. 3, but it amounts to the observation that, with heat carried by electrons, the deflection of those electrons by a magnetic field occurs to bring them to thermal rest (effectively zero temperature K) as they transfer energy to the capacitor, followed by their recovery of heat by cooling the substance of their metal host. Carnot efficiency referenced on zero Kelvin is 100% as far as heat/electricity conversion is concerned.

    It then needs little imagination for any enterprising research organization to see that such technology, if proven in this particular respect, can provide a complete answer to the world’s future energy needs in that, by arranging a conventional Carnot-limited heat pump in back-to-back operation with the Strachan-Aspden non-Carnot-limited heat pump, and deploying atmospheric sources of heat one can generate electricity.

    Hitherto the non-Carnot-limited conversion of heat into electricity has been elusive but it is possible in that it already occurs in practice in one half of a thermocouple circuit but there is there the concomitant requirement that the electricity has to close the circuit through the other thermocouple junction which makes the reverse conversion.

    All that this Report is suggesting here is that the evidence from the transverse-to-heat-flow current excitation of a heated nickel-electrode capacitor shows how we can intercept the energy and make the non-Carnot-limited conversion without paying the full price of the reverse conversion at ambient temperature. The ‘lower’ temperature conversion occurs inside the metal as electrons are deprived transiently of their thermal energy. It occurs at positions in the metal where there is only one prevailing temperature. There is no way that Carnot criteria can apply unless there are two temperatures associated with that event and the only temperature that can differ from that prevailing in the metal is the temperature resulting when the electrons give up their thermal energy by being deflected into the charged condition at the interface surface of the nickel and the dielectric. That temperature has to be lower than the ambient temperature of the metal and the electron can only recover equilibrium and carry heat forward if it then takes heat away from the crystal body of the nickel.

    Given that the ferromagnetic plate electrode is the seat of the action associated with the Nernst Effect it may seem that there is no need to provide the bimetallic structure of the Strachan-Aspden embodiments. However, it is important to see that there is a two-fold benefit from the use of bimetallic laminations. Firstly, the second metal helps to spread the charge trapped at the interface between the metal and the dielectric and this allows it to participate more fully in the two-way oscillation of current flow. Secondly, the second metal brings to bear the Peltier Effect and this can help to sustain temperature gradients which activate the cooling. Note here that the first and third Strachan-built prototypes had an intrinsic design symmetry and an input current oscillation developed the cooling action with no input temperature gradient.

    In other words, the use of bimetallic plate electrodes meant that the back-to-back action described above was at work in those devices.

    This Report, therefore, highlights the importance of the Strachan-Aspden invention and hopefully will serve to excite the interest of those corporations having the resources needed for its onward development.

    Energy Science Report No. 3 will be issued when this author has completed some further experiments and, in the meantime, some of the findings will be available in confidence to sponsors.

    The APPENDIX sequence which follows comprises items written at different times as this project evolved and there are a few published articles and papers that are not included owing to the length of this Report.

    It is believed, however, that what is described or identified in this Report will serve as a guide to would-be researchers who wish to become involved in this subject and should suffice as full information about the invention.

    The prospective importance of this technology is so great, having regard to the need to avoid the pollution problems of existing refrigeration and energy generation technology, that it is hoped that others will take this project forward on their own initiative. Should any such researcher make progress in this regard, leading to demonstrable devices confirming the viability of the technology, then, so long as this author has control of the patent rights involved there is scope for merging interests in a joint venture.

    So far as the availability of rights under the patents is concerned, enquiries from corporations are invited but no licence deals can be entered at this time as the object is to sell the patents outright as a total package, which means that licence dealings will be for the purchaser to determine.

    This does not preclude an immediate undertaking in the nature of an option by which some nominal funding will secure a would-be developer, who already commands the necessary research facilities, an interest in the rights whilst evaluating the invention based on prototype building and testing.

    Enquiries concerning the patent rights should be directed to me and enquiries concerning availability of Energy Science Reports should be directed to Sabberton Publications (see address below).

    18th July 1994

    DR. HAROLD ASPDEN
    c/o SABBERTON PUBLICATIONS, P.O. BOX 35, SOUTHAMPTON, SO16 7RB, ENGLAND. FAX: Int+44-23-8076-9830. TEL: Int+44-23-8076-9361.

    APPENDIX I

    Schedule of Patents

    Patent applications listed as 1-10 below all have the title:
    “Thermoelectric Energy Conversion”
    and all were filed naming H. Aspden and J. S. Strachan as co-inventors. Dr. Harold Aspden purchased from Strachan-Aspden Limited all rights in these applications on 12th January 1992. This company, registered in Scotland, was dissolved in July 1992, as it had become more expedient to operate from a company, Thermodynamics Limited, registered in England at Dr. Aspden’s address.

    1. U.K. Patent Application No.: 8,826,952
      Date of Filing: 18th November 1988
      Grant as U.K. Patent No: 2,225,161
    2. U.K. Patent Application No.: 8,828,307
      Date of Filing: 5th December 1988
      [This served only as an international priority document for listed applications 3-4 & 6-10 below.]
    3. U.K. Patent Application No.: 8,920,580
      Date of Filing: 12th September 1989
      Grant as U.K. Patent No: 2,227,881
    4. European Patent Appln. No.: 89,311,559.2
      Date of Filing: 8th November 1989
      Published Specification No: 0369670
      Countries designated: Austria, Belgium, Switzerland, Germany, Spain, France, United Kingdom, Italy, Lichtenstein, Luxembourg, Netherlands and Sweden
      [Presently pending]
    5. U.S. Patent Application No.: 07/429608
      Date of Filing: 31st October 1989
      Grant as U.S. Patent No: 5,065,085
      Date of Grant: 12th November 1991
    6. U.S. Patent Application No.: 07/439,829
      Date of Filing: 20th November 1989
      Grant as U.S. Patent No: 5,288,336
      Date of Grant: 22 February 1994
    7. Japanese Patent Appln. No.: 1-299481
      Date of Filing: 17th November 1989
      [Presently pending]
    8. Canadian Patent Appln. No.: 2,003,318-5
      Date of Filing: 17th November 1989
      [Presently pending]
    9. Australian Pat. Appln. No.: 44771/89
      Date of Filing: 17th November 1989
      Grant as Australian Pat. No: 622,239
    10. Eire Patent Appln. No.: 3677/89
      Date of Filing: 17th November 1989
      [Presently pending]

    ************

    The following patent rights are currently in process. With the exception of the application identifying Thermodynamics Limited as applicant (sole inventor J. S. Strachan) all these are registered in the name of Dr. Harold Aspden as applicant and sole inventor. Dr. Aspden is empowered to negotiate rights under patents owned by Thermodynamics Limited.

    1. U.K. Patent Application No: 9,212,818
      Date of Filing: 17th June 1992
      Published Specification No: 2,267,995
      [Presently pending]
    2. U.K. Patent Application No: 9,302,354
      Date of Filing: 6th February 1993
      Applicant: Thermodynamics Ltd.
      [Presently pending]
    3. U.S. Patent Application No: 08/018281
      Date of Filing: 16th February 1993
      [Presently pending]
    4. U.K. Patent Application No: 9,321,036
      Date of Filing: 12th October 1993
      [Presently pending]

    The above is the status as at 18th July 1994.

    APPENDIX II

    ‘Solid-State Thermoelectric Refrigeration’

    [This is the text of a paper submitted to IECEC by H. Aspden and J. S. Strachan, a summary version of which was presented in person by Dr. H. Aspden at their 28th Intersociety Energy Conversion Engineering Conference held in Atlanta, Georgia, U.S.A., August 8-13, 1993.]

    This paper reports progress on the development of a new solid-state refrigeration technique using base metal combinations in a thermopile.

    Thermoelectric EMFs of 300 µV per degree C are obtained from metal combinations such as Al:Ni, assembled in a thermopile of novel structure. By providing for thermally driven Thomson Effect current circulation in loop circuit paths parallel with the temperature gradient between two heat sinks and also for superimposed transverse current flow driven through a very low resistance path by Peltier Effect EMF, an extremely efficient refrigeration process results.

    With low temperature differentials, one implementation of the device operates at better than 70% of Carnot efficiency. It has the form of a small panel unit which operates in reversible mode, converting ice in a room temperature environment into an electrical power output and, conversely, with electrical input producing ice on one face of the panel while ejecting heat on the other face.

    An extremely beneficial feature from a design viewpoint is the fact that the transverse excitation is an A.C. excitation, which suits the high current and low voltage features of the thermopile assembled as a stack within the panel.

    A prototype demonstration device shows the extremely rapid speed at which ice forms, even when powered by a small electric battery, and, with the battery disconnected and replaced by an electric motor, how the ice thus formed melts to generate power driving the motor.

    The subject is one of the two innovative concepts which were the subject of the paper No. 929474 entitled “Electronic Heat Engine” included in volume 4 of the Proceedings of the 1992 27th IECEC.

    The technology to be described is seen as providing the needed answer to the CFC gas problem confronting refrigerator designers. From a conversion efficiency viewpoint this device, which uses a solid-state panel containing no electronic components and a separate solid-state control unit which does contain electronic switch and transformer circuitry, outperforms conventional domestic refrigerators. Since it has no moving parts and contains no fluid, its fabrication and operational reliability promise to make this the dominant refrigeration technology of the future.

    However, the scientific research and development of the underlying principles have a compelling interest and pose an immediate challenge inasmuch as recent diagnostic testing has pointed to a feature inherent in the prototype implementation that has even greater promise for future energy conversion technology.

    This paper will address the subject in two parts. Firstly, the prototype will be described together with its performance data. Then, the ongoing development arising from the new discovery will be outlined.

    General Operating Principle

    The research was based on the use of a commercially available dielectric sheet substrate which had a surface layer of aluminium bonded to a PVDF polymer film by an intermediate layer of nickel. This gave basis for the idea of applying a temperature differential edge-to-edge to promote thermoelectric current circulation by differences in the Peltier EMFs at the opposite edges of the film.

    However, the nature of this material, which was intended for use in a piezoelectric application and so had a metal surface film on both faces, gave scope for crosswise A.C. excitation, as if it was a parallel plate capacitator. Of interest to our research was the question of how the transverse A.C. flow of current through the bimetallic plates would interact with the thermoelectric current circulation.

    Our finding was that the underlying D.C. current circulation which tapped into the heat source thermoelectrically was affected to an astounding degree once the A.C. excitation was applied. Whether we used frequencies of 500 kHz or 10 kHz, the thermoelectric Peltier EMF generated by the Al:Ni thermocouple was of the order of 300 µV/oC, which was 20 times the value normally expected from D.C. current activation.

    It may be noted that, with the thermoelectric aspect in mind, the PVDF substrate film used was made to order, being specially coated with layers of nickel and aluminium to thicknesses of the order of 400 and 200 angstroms, respectively. This was intended to provide a better conductance matching for D.C. current flow in opposite directions in the two metals, it being optimum to design the test so that heat flow from the hot to the cold edges of the film would, by virtue of the Thomson Effect in these respectively electropositive and electronegative metals, suffice to convey equal currents in the two closed path sections without necessarily drawing on the transversely-directed Peltier EMF action.

    It was hoped that the latter would contribute to the A.C. power circuit by a push-pull oscillatory current effect whereby heat energy and A.C. electric energy would become mutually convertible.

    A full explanation of the commutating effect obtained by combining matched current flow of the transverse A.C. and the in-film circulating D.C. is given elsewhere (Aspden and Strachan, 1990 and, Aspden, 1992). However, Fig. 1 may suffice to represent schematically the functional operation.

    Fig. 1. Thermoelectric Circuit

    Fig. 1(a) shows how bimetallic capacitor plates separated by dielectric substrates are located between hot (T’) and cold (T) panel surfaces with electrical connections at the sides of the panel. Some of the plates are floating electrically, being coupled capacitatively in series, whereas the connections linking an external circuit through an SCR oscillator switch circuit form a parallel-connected capacitor system.

    Fig. 1(b) shows how D.C. current circulates in two bimetallic plates with a matching superimposed transverse A.C. current.

    Fig. 1(c) applies when the A.C. current flow is in the upward direction.

    The point is that, in alternate half cycles of the A.C., the current flow operates to block the D.C. flow at one or other of the thermocouple junctions whilst segregating the Peltier heating and cooling on their respective sides of the panel.

    This has several very interesting consequences.

    1. It is found that the Peltier EMF is directed into the A.C. circuit, which being transverse to the thin metal film, is a low resistance circuit with high but virtually loss-free capacitative impedance.
    2. By diverting the electric power generated thermo-electrically, the D.C. current flow in the planes of the metal films was virtually exclusively that of heat-driven charge carriers. The current was sustained by the normal heat conduction loss through the metal and so did not detract from thermoelectric conversion efficiency by drawing upon the generated electric power.
    3. Thirdly, and most unexpectedly, it was found that the current interruption precluded the formation of what we termed ‘cold spots’ at the Peltier cooled junctions. These latter spots arise in any normal thermocouple owing to concentrations of cold by Peltier cooling in a way which escalates so that the junction crossing temperature of a current is very much lower than that of the external heat sink condition. This stifles the thermoelectric power in the D.C. thermocouple and it was our discovery that the cyclic interruption of the flow by the transverse excitation technique accounts for the transition to the very high 300 µV/oC thermoelectric power. The latter has been observed consistently in all three prototypes built to date and in diagnostic test rigs using the Al:Ni metal combination.
    4. Fourthly, however, the eventual testing of operative devices, though performing overall within Carnot efficiency limitations, awakened special interest because there had to be something most unusual about the temperature profile through the device if the best performance measured was to be bounded by the Carnot condition.

    Our research is now casting light upon that latter aspect and may herald a major breakthrough in energy conversion technology generally. However, even without the latter, the technology as developed to date does already justify commercial application in refrigeration systems and that is the primary focus of this paper.

    Development History

    The project has been slow to progress from its inception. One of us, Edinburgh scientist, J. S. Strachan (formerly with Pennwalt Corporation) assembled the device as a small flat module with 500 layers of bimetallic coated PVDF film. It was formed in a 20 by 25 series-parallel connection array which was a design compromise to enhance the capacitor plate area, whilst matching the A.C. excitation voltage and the current rating to the switching circuitry and dielectric properties of the PVDF.

    The device performed remarkably well when first tested, without requiring transitional stage-by-stage development to overcome problems. This had the effect of putting in our hands an invention which worked better than we had a right to expect but left us at the outset not knowing precisely how the different elements of the device were really contributing to the overall function.

    More importantly, however, though the thermoelectric operational section of the device was at the heart of the action, the implementation, which used the PVDF dielectric and a capacitative circuit, posed problems that were seen as formidable and yet were only peripheral to the real invention. There was also some doubt as to whether the properties of the PVDF had a direct role in the energy conversion. There was difficulty in planning, in cost terms, the onward scaling-up development, owing to the perceived problems of switching high currents at the necessary voltage level and frequency.

    Commercial pressures and the limited resources involved in what became a privately sponsored venture to develop the invention, combined with the barrier posed by the switch versus thermoelectric design conflict, halted R & D and led, sadly, to the project falling into a limbo state. This was until interest was aroused by the publication in the latter part of 1992 of the above-referenced 27th IECEC paper (Aspden, 1992) and by the article in Electronics World (Aspden 1992).

    Sponsorship interest in the R & D concerning heat-to-electricity power conversion has now revived, led also by a demonstration made possible by the building of a third prototype which incorporates 1,000 PVDF substrate thermocouple capacitor plates and which provides the following test data.

    Refrigeration Performance Data

    All three prototype devices built to date exhibited a remarkable energy conversion efficiency. They all operated with different switching techniques and different design frequencies.

    The first prototype was dual in operation in that it was bonded to a supporting room-temperature heat sink block and the application of ice to its upper face resulted in the generation of electricity sufficient to spin an electric motor. Conversely, the connection of a low voltage battery supply to the device resulted in water on the upper surface freezing very rapidly.

    Had this first prototype been assembled the other way up it would have been easy to use calorimeter techniques and measure heat-electricity conversion in both operational modes. As it was, an attempt to chemically unbond the device from the heat sink resulted in corrosion damage which destroyed the device.

    The second prototype was built, not for self-standing dual mode operation, but expressly to test the heat to electricity power generation efficiency with variable frequency. It was not self-oscillating and, as it did not function in refrigeration mode, it offered no test of refrigeration efficiency. It gave up to 73% of Carnot conversion efficiency in electric power generation with room temperature differentials of the order of 20°C.

    The recently constructed third prototype is superior in its electronic switching design and works well in both electric power generation and refrigeration modes.

    There is, however, a circumstance about its operation which means that this particular demonstration prototype, according to its intrinsic magnetic polarization state, works more efficiently in one or other of its conversion functions. This particular third prototype operated with higher Carnot-related efficiency in the electric power generation mode than in the refrigeration mode. Also, for the same reasons, and owing to an additional factor concerning the power drawn by the electronics and the impedance-matching internal load circuitry, the overall external efficiencies are very much lower than can be expected in a fully engineered product implementation.

    The refrigeration performance data presented below is, therefore, a worst-case situation and will, without question, be improved upon in the months following the date when this text is prepared.

    The device included an SCR switching circuit which was self-tuning and ran as an oscillator powered from electricity generated from melting ice in power generation mode or drawing on a battery supply in the refrigeration mode. However, the power taken up by this circuitry was factored into the overall performance and, because the electrical demands of the circuit were high in relation to the small demonstration thermoelectric core unit to which it was coupled, this meant that the thermoelectric core of the device had to be functioning at a higher efficiency than the overall figures suggest.

    The active heat sink area of the device was about 20 sq. cm and a typical test involved a frozen block of 6 ml of water. A test performed after the lower heat sink had settled to a temperature of 25.6°C involved pressing the block of ice, in a slightly melting state, onto the upper heat sink with a polystyrene foam pad. The output voltage generated was fed to a 3 ohm load. It took 9 minutes for the ice to melt, during which time the measured output was a steady 0.67 V. These data show that a heat throughput of 3.7 watts generates electric power of 0.15 watts under temperature conditions for which the Carnot efficiency is 8.6%. This indicates an overall performance of 47% of the Carnot value.
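
    These figures can be reproduced from first principles. The short Python check below is a sketch, assuming only the standard latent heat of fusion of ice (about 334 J/g); every other number is taken from the test description above.

        # Verify the power-generation test figures quoted above.
        LATENT_HEAT_ICE = 334.0      # J/g, standard latent heat of fusion (assumed)
        mass_g = 6.0                 # 6 ml of water frozen, ~6 g
        melt_time_s = 9 * 60         # ice melted in 9 minutes

        heat_throughput_W = mass_g * LATENT_HEAT_ICE / melt_time_s   # ~3.7 W

        V_load, R_load = 0.67, 3.0   # steady 0.67 V into a 3 ohm load
        electric_power_W = V_load ** 2 / R_load                      # ~0.15 W

        T_hot, T_cold = 25.6 + 273.15, 0.0 + 273.15                  # kelvin
        carnot = (T_hot - T_cold) / T_hot                            # ~8.6%

        overall = electric_power_W / heat_throughput_W               # ~4%
        print(f"{heat_throughput_W:.1f} W in, {electric_power_W:.2f} W out, "
              f"Carnot {carnot:.1%}, performance {overall / carnot:.0%} of Carnot")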

    It is noted that the 73% value obtained with the second prototype applies to a device which did not incorporate an oscillator demanding power but had simple electronic switching controlled by, and drawing negligible power from, an external function generator.

    To test the refrigeration mode, 3 ml of water was poured into a container on the upper surface of the device and a battery supply of 7.2 V fed to the SCR resonator with a limiting resistor now switched into circuit to protect the SCR during its turn-off. This resistor reduced the efficiency further. The circuit drew 6.3 watts and the water froze in 73 seconds.

    Since convection was minimal, the water closest to the cooled surface froze first and this immediately formed an insulating barrier, which meant operation thereafter at a significant subzero temperature at that heat sink during most of those 73 seconds. However, the overall temperature difference, ignoring that temperature drop in the ice, was 26°C, associated with a cooling power of 13.7 watts for an electric power input of 6.3 watts. This represents a coefficient of performance of 2.17, or 21% of Carnot efficiency. Cooling action at below minus 40°C has been demonstrated.
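
    Again as a sketch, the quoted coefficient of performance follows from the stated figures if, as the 13.7 watt value implies, only the latent heat of freezing (assumed 334 J/g) is counted:

        # Verify the refrigeration-mode figures quoted above.
        LATENT_HEAT_ICE = 334.0          # J/g (assumed standard value)
        mass_g = 3.0                     # 3 ml of water frozen
        freeze_time_s = 73.0

        cooling_W = mass_g * LATENT_HEAT_ICE / freeze_time_s   # ~13.7 W, latent heat only
        input_W = 6.3                                          # battery power drawn

        cop = cooling_W / input_W                              # ~2.2
        T_cold, dT = 273.15, 26.0                              # ice point, 26 degree differential
        cop_carnot = T_cold / dT                               # Carnot limit T/(T'-T), ~10.5
        print(f"COP {cop:.2f} = {cop / cop_carnot:.0%} of the Carnot limit")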

    Based on such worst-case data, which nevertheless applies to a simple solid-state device and compares well with the coefficient of performance data of domestic refrigerators, it can be assumed that the technology is capable of meeting production requirements of non-CFC refrigerators and domestic air conditioning equipment.

    Outlook following Breakthrough Discovery

    Diagnostic test work has proved that the device operation is independent of the piezoelectric or pyroelectric properties of the PVDF substrate used. Given that the action is truly that of the Peltier Effect, there should be a current circulation in the bimetallic thin film productive of magnetic polarization, and by detecting such polarization as a function of the applied temperature differential one can verify that this is so.

    It is to be noted that our early research had shown that the thermoelectric EMF could, under certain circumstances, be greatly affected by the application of a magnetic field to the thermocouple junctions. Accordingly, the tests aimed at sensing thermoelectrically-generated magnetic field effects had a particular significance. Furthermore, we had some interest in the Nernst Effect, by which a temperature gradient in a metal in the x direction, with a magnetic polarizing field applied in the y direction, can develop an electric field action in the mutually orthogonal z direction.

    It has become, therefore, a subject of research interest to examine how a bimetallic interface subjected to a transverse magnetic field and a temperature gradient in the interface direction affects the circulation of thermoelectric current between the metals.

    What we have discovered that is of great importance to the development of the solid-state thermoelectric refrigerator is that the setting up of a temperature gradient in the bimetallic interface plane between two contiguous metal films will produce a magnetizing field which readily saturates the metal if ferromagnetic. Thus the nickel film in the prototypes tested becomes strongly magnetized in one or other direction according to the direction of the temperature gradient.

    When this magnetic field is considered in the context of the Nernst Effect it is seen that it can lead to a transversely directed EMF governed by the product of the temperature gradient and the strength of the magnetic polarizing field. This transversely directed EMF then contributes a bias active in the individual metal and, being in the same transverse direction, supplements or offsets the Peltier EMF in the prototype implementations.

    Remembering then that the heating and cooling actions in the operation of the prototype devices are governed by current flow in metal which is, adjacent to the respective heat sinks, in line with or opposed to the action of an EMF, one can see how something new has appeared on the technology scene of thermoelectricity. By using heat to generate current circulation, which in turn generates a magnetic field to provide ferromagnetic polarization, a powerful Nernst EMF set up in the metal can act as a catalyst in supplementing the junction Peltier heat transfer action associated with EMF across a metal interface.

    This may well be the action which accounts for the very high thermoelectric conversion efficiency we have measured.

    In order to quantify this as it may apply to the prototypes we have built, note that a 400 angstrom thickness of well-magnetized nickel, subjected to a temperature drop of 20°C across a metal length of 2.5 mm, implies a Nernst EMF of the order of 6 mV across the 0.04 micron nickel thickness.
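
    To put those numbers in perspective, the sketch below merely re-expresses the stated figures as a temperature gradient and a transverse field; no Nernst coefficient is assumed, the 6 mV being taken as given.

        # Re-express the quoted Nernst-effect figures (no new data assumed).
        dT, length_m = 20.0, 2.5e-3          # 20 degree drop across 2.5 mm of nickel
        grad_T = dT / length_m               # 8000 K/m in-plane temperature gradient

        V_nernst, thickness_m = 6e-3, 4e-8   # 6 mV across 400 angstrom (0.04 micron)
        E_field = V_nernst / thickness_m     # transverse field, ~1.5e5 V/m

        # Effective transverse coefficient implied by these figures:
        # E = (N*B) * grad(T), so N*B = E / grad(T)
        print(f"grad T = {grad_T:.0f} K/m, E = {E_field:.2e} V/m, "
              f"implied effective N*B = {E_field / grad_T:.2f} V/K")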

    Though small, this is significant alongside the Peltier EMF across a junction, but the really important point is that this Nernst EMF is set up in the metal and not across a metal junction interface. In that metal, owing to the free-electron diamagnetic reaction currents within the nickel and around its boundary, which offset in some measure the atomic spin-polarization of the ferromagnet, there is then scope for some very unusual thermodynamic feedback effects. Those diamagnetic reaction currents which are themselves powered by the thermal energy of the electrons have a strength related to the magnetic polarization and so exceed, by far, the thermoelectric current flowing across junction interfaces. The heating and cooling processes transfer power between the heat sinks in proportion to current times voltage and the in-metal action within the nickel could therefore generate very significant thermal feedback, thereby greatly enhancing the efficiency well beyond that of the normal thermoelectric bimetallic junctions.

    This action only results where one of the metals is ferromagnetic and the configuration of the device is such that an applied temperature gradient promotes internal circulation of thermoelectric current around a closed circuit able to develop a magnetic field in the nickel directed transversely with respect to the temperature gradient.

    Conclusions

    The exciting prospect for future development of refrigeration techniques centres on the possibility that the feedback process can be greatly enhanced by using thicker metal films. It is hoped, therefore, that the research reported here will soon advance to probe the limits of efficiency that are possible with this new solid-state refrigeration technology.

    In this connection the truly exciting prospect arises from the possibility that the efficiency barrier set by the Carnot criterion can be penetrated.

    To understand this, note that the Peltier EMF on the hot side of a thermocouple is proportional to the higher temperature T’ and that at the cooler side is proportional to the lower temperature T. For a given current circulation the heat energy extracted is proportional to T and the net input of electrical power is proportional to T’-T.

    This is the reason why the coefficient of performance has a Carnot limit of T/(T’-T).
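
    In symbols, as a standard textbook derivation rather than anything additional to the report: with thermoelectric power α and circulating current I, the Kelvin relation Π = αT gives

        \[
        Q_{\mathrm{cold}} = \alpha T I, \qquad Q_{\mathrm{hot}} = \alpha T' I,
        \qquad W = Q_{\mathrm{hot}} - Q_{\mathrm{cold}} = \alpha (T' - T) I,
        \]
        \[
        \mathrm{COP} = \frac{Q_{\mathrm{cold}}}{W} = \frac{T}{T' - T}.
        \]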

    Now, if there is a thermal feedback action that is regulated by a Nernst EMF and we can contrive to assure that the forward transfer of heat arises from a uniform temperature gradient in the ferromagnetic metal, then the Nernst EMF is the same on both sides and the amount of heating on the hot side is, in theory, exactly equal to the amount of cooling on the other side.

    There is then conservation of energy with negligible net energy input, yet heat is transferred from the cold heat sink to the hot one, implying a very high coefficient of performance that is not temperature-limited according to the Carnot requirement.

    This, therefore, is the challenging possibility that looms in sight and is heralded by the rather fortuitous discovery of the surprisingly high performance characteristics of the Strachan-Aspden base metal thermoelectric power converter.

    The Strachan-Aspden device uses what the inventors see as conventional physics, albeit with the innovation of combining transverse A.C. excitation with D.C. thermocouple excitation. However, it does seem that in some curious way the device happens to have features which bring some new physics to bear. By producing a thermally-driven current crossing a strong magnetic field in metal the Lorentz forces on that current develop a transverse reaction EMF in that metal. The combination of that transverse Nernst EMF with a circulating current confined within the metal can, it seems, operate to transfer heat thermodynamically, working through the underlying ferromagnetic induction coupling in the metal. This is somewhat analogous to the way heat energy is somehow diverted into electricity in being routed between the hot and cold heat sinks in a conventional Peltier thermocouple circuit. It does, however, introduce new physics to the technology of refrigeration and offers great promise.

    References

    Aspden, H.; Strachan, J. S., European Patent Application No. 0369670, 1990.
    Aspden, H., SAE Technical Paper Series No. 929474, 1992.
    Aspden, H., Electronics World, July 1992, pp. 540-542.

    APPENDIX III

    The Strachan-Aspden Invention: Operating Principles
    [October 1989 Report]

    The object of this Report is to merge a review of the status of the project at the time the primary research was abandoned in 1990 with an evaluation of the design options for taking the project forward. Appendix III, together with Appendices IV and V, comprises extracts taken from an earlier Report dated 23rd October 1989 and prepared when the project was most active. These provide background information.

    INTRODUCTION

    Imagine a panel fitted like a sheet of glass into a window frame but serving as a silent solid-state heat engine which uses electricity to cool the room in summer and heat the room in winter with the high efficiency of a heat pump. Imagine the same panel fitted into a glazed enclosure designed to trap atmospheric radiation to develop a temperature difference across the inner and outer surfaces of the panel and using the trapped heat to produce electricity.

    The Strachan-Aspden invention provides the technology needed to fabricate such a panel and brings with it a quite interesting challenge. This challenge is a design problem: that of deciding between a mode of construction that has been tested in prototype form and one that needs some research in advance of development but should prove superior from a commercial viewpoint. The latter task is to scale down the internal operating voltage and increase the internal current flow, coupled with conversion to a pulsed D.C. mode of operation rather than having a resonant circuit sustaining A.C. oscillations through the dielectric of a capacitor.

    The R & D activity had just begun to address this problem when the Scottish small-business entrepreneurs who undertook initial development deserted the project as their other business ventures failed. This has meant that an invention which could make a major contribution in the effort to free the world from polluting energy technology became virtually dormant.

    The merit of the invention can be judged from one simple technical fact. Operating from a room temperature source of heat and melting ice, the tested prototype device was able to generate electricity at close to the Carnot efficiency limit by a technology utilizing the thermoelectric power of base metals, at a rating equivalent to 20 kW of electrical power output per kg of metal in circuit. In a non-developed hand-fabricated form, the device performed at 73% of Carnot efficiency. This is not optimum performance and is far from exploiting the full design potential.

    The invention opens up a wholly new field of technological opportunity. It arises from a major scientific breakthrough which involves a totally unexpected discovery. In original conception the invention aimed to use the properties of a dielectric film as a barrier to heat loss by thermal conduction and the bimetallic coating on the film as a thermocouple circuit to convert heat into electricity. In reality it was discovered that the transverse oscillatory current excitation of a thermoelectric circuit produced an astounding effect on the thermoelectric power of the base metal combination.

    1989 RESEARCH PROGRESS REPORT

    “The plan during 1989 was for Strachan to engage in detailed research and onward development of the technology involved with a view to consolidating the patent position by year end.

    The research phase has not been without its traumas, essentially for two reasons. Firstly, there was a set-back in that, to perform certain tests on the prototype device aimed at measuring efficiency at an elevated temperature, it had to be detached from its heat base. Secondly, in attempting this using a chemical solvent to separate the two parts, the chemical found its way into the main structure which, this possibility not having been foreseen, had not been sealed against such contamination. This upset its operation; it was a lesson learned, but at that stage a set-back to the development plan. It was not then possible, without rebuilding, to get the full measure of the performance properties needed to comply with the initial programme. The question at issue was one of controlling the temperature differential on a sustained basis with measured heat throughput, rather than monitoring a small piece of ice as it melted by drawing heat from the environment, some of which was being intercepted to produce electricity in transit through the device.

    Even before this set-back many experiments on components were performed to test operative features in isolation, with the early recognition that something totally unexpected was involved. A very substantial increase in thermoelectric EMF per junction, far in excess of reference data indication, had been achieved thanks to the particular operating technique adopted in the prototype. However, in spite of these progressive steps, the onward development necessitated a firm measure of the minimal operating efficiency of the basic device and, though the eventual products will be far easier to assemble, a small panel was made which closely conformed in design with the original but included certain modifications excluding what by then had come to be regarded as possibly non-essential features. This was a gamble, especially as the construction was very intricate and time-consuming when done by hand, with ongoing circuit tests during assembly to assure proper current distribution and uniformity of response. However, in the event, the device, once completed, did perform with equal or better results than the original version.

    Happily, in confirming the new design assumptions made during the first months of 1989, the tests on this second device proved to be a major step forward and justified the filing of a third patent application in September 1989.

    The second set-back proved how wise it was to have held back on early publication. The onward research investigations showed that what had been a primary design feature intended to block heat loss and so improve efficiency was not directly effective in that role, at least in the way we intended. Indeed, a fortuitous discovery had been made by proceeding on that assumption and the phenomenon involved had had the same effect, but not for the reason first believed. Instead of physically obstructing heat flow through the device, as had been intended, the operative technique actually converted almost all the heat into electricity before it reached the point of no return and so allowed very little to cross by thermal conduction and so escape as waste.

    It was only after this discovery was made and an understanding reached concerning the process involved that it became possible to begin to consider disclosing to the scientific community, not just what had been achieved, but why it works so well.

    This disclosure is being made now that the initial applications for foreign patent rights have been registered and the purpose is expressly to attract interest from those who have the resources to help in the development of this new energy technology. It is only by such shared action on an equitable commercial basis that the benefits of the Strachan-Aspden invention can make their full contribution in helping to reduce the world’s energy pollution, whilst conserving the chemical qualities of fossil fuel resources for future generations.”

    The above text, quoted from the 23rd October 1989 report was prepared as a confidential document. The sponsors used the report to try to attract investment in their overall business interests and shortly thereafter ceased to fund R & D on this invention. Apart from initial costs of overseas patent applications, the funding that had been provided had been mainly that needed as salary by Scott Strachan whilst involving him as a consultant on other projects. As yet, therefore, this important energy invention has not had the benefit of serious development funding.

    The research effort up to October 1989 had concentrated on simplifying the assembly of a prototype test device using the bimetallic coated film which could also serve as a capacitor dielectric. The immediate objective was to measure the heat-to-electricity energy conversion efficiency and explore the design criteria involved. The inventors were, however, mindful that the principles of operation of the device did not really depend upon capacitative operation and the current limitation which that implied. It was deemed possible to extend the technology to structures which involved an all-metal through-circuit for electrical power and some plans were made for building such all-metal structures for bench testing. Had the research been active in 1990 this alternative would have been thoroughly tested so that a choice could have been made as to the best mode of implementation in a production assembly.

    It is noted that no formal product design proposal, with costing that could be used in a business plan, was drawn up in the 1989 period. Strachan was engaged on the preliminary functional testing to assess the performance and determine the optimum techniques and choice of materials. Without this information, one could not price either the market value of a product or its manufacturing cost. Even now, product costing is not really possible until the through-metal-circuit R & D investigations have been completed. The fact that a 20 kW rate of electrical power generation can be delivered by 1 kg of metal, drawing on a temperature differential of 20°C, is the best indicator that it must be possible to build an operational unit that can be costed low enough to justify a very large sales volume. The real question now concerns the best configuration of the metal used and the best choice of metals.

    THE STRACHAN-ASPDEN INVENTION

    [The section in quotes is copied from the 23rd October 1989 report]

    “The following is a technical description of the principles underlying the Strachan-Aspden invention written on the assumption that it would form the basis of a lecture by Harold Aspden to an audience who would later witness a demonstration of the operational device by Scott Strachan.

    “Before outlining the technical nature of our invention there is one very significant point that I think is worth registering at the outset. The test device on which our company was founded used the thermoelectric properties of contact between two base metals, aluminium and nickel, to produce electrical power from a low grade heat source. A temperature difference of 20 degrees relative to room temperature was sufficient to produce a steady power output of one fifth of a watt per cubic millimeter of metal in the thermocouple circuit. Scaled up, that is 20 kW per kg of metal. It did this with an efficiency that was well above 50% of Carnot efficiency for this temperature range. This is as good as internal combustion engine performance where the fuel burns at more than 2,000 degrees.

    This is an invention which should have been made 50 years ago as part of the solution of the electronic age. As to the patentable merits of the invention, it has been said that even a simple invention can be judged highly if ‘a long felt want’ is satisfied. No one can deny that we need a breakthrough in the pollution-free energy field and what I have to disclose is not quite so simple.

    The device is essentially a flat panel that can be fitted like a window or used as a heat exchange interface in an engineered installation to convert heat energy into electricity or to use electricity to cool one face of the panel and heat the other face.

    It is simply a panel with an electric supply lead. All that there is between the two faces of the panel is a laminar structure of metal with some insulation, together with a small electrical transformer and an electronic control unit connected to the supply lead via a switch.

    What is special, however, and what causes this device to be a revolutionary breakthrough in energy technology, is governed by a combination of two special features. These we have called:

    1. DYNAMIC EXCITATION FEATURE
    2. TRANSVERSE COMMUTATION FEATURE

    There is also a third feature which has been used in the prototypes to enhance efficiency even further, but which will only be used in very special products. This is termed:

    3. THIN FILM ENHANCEMENT

    Basically, we are talking about a thermoelectric system using either the Seebeck Effect or its converse, the Peltier Effect. By connecting different metals in an electrical circuit and positioning the respective junctions on the hot or cold side of the panel, the passage of D.C. current is related to the thermodynamic effects. Energy can be converted in this way, as is well known, but not, until now, with an efficiency that has such overwhelming implications in the field of energy technology.

    The thermocouple working in Seebeck mode operates to extract heat from one junction and inject heat at the other junction. The balance of energy is electrical in the sense that an EMF or voltage is set up at the cooled junction and this can deliver output power in the electrical circuit, provided it is smaller than the back EMF or reverse voltage at the heated junction.

    In efficiency terms, the operation is governed by the fact that the heat absorbed or produced at a junction is proportional to the junction temperature measured on the absolute scale, that is, referenced on -273 degrees centigrade. Therefore, if one junction is at -3 degrees centigrade (270 K) and the other at 27 degrees centigrade (300 K), we can produce 300 units of electricity from the cooling effect at the hot junction but have to give back 270 units of electricity by heating the cold junction. The net gain in electricity, in theory, could be 30 units for the price of a 270 unit throughput, or 300 unit input, of heat energy under these low temperature conditions. These high numbers of heat energy units should not be regarded as energy waste. They relate to what is called ‘enthalpy’, which is a measure of heat content referenced on 273 degrees centigrade below zero, and even ice has an enormous heat content on this basis of reference.

    What has just been described is the so-called Carnot efficiency. It is 10% for the 30 degree temperature differential considered. It works either way, in the sense that if electricity is supplied rather than produced, the input of 30 units of electricity can cause a transfer of 270 units of heat from the outside temperature source at -3 degrees and heat a room to 27 degrees. This is the Peltier mode of operation and it provides a tenfold gain on the use of the electricity in an electric heater, assuming full Carnot efficiency. Operating at 50% of Carnot efficiency, a 10 degree heating can be achieved with only 7% of the power needed by an electric convector or radiator.
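
    The arithmetic of that example is easily checked; the sketch below assumes an outdoor source at 270 K and a room at 300 K for the tenfold figure, and roughly room-temperature conditions (a delivery temperature near 303 K is assumed) for the 10 degree case.

        # Heating-mode (Peltier) coefficient-of-performance arithmetic.
        T_hot, T_cold = 300.0, 270.0            # 27 degree room, -3 degree source
        cop_heating = T_hot / (T_hot - T_cold)  # Carnot heating COP = T'/(T'-T) = 10
        print(f"Tenfold gain: COP = {cop_heating:.0f}")

        # A 10 degree lift at ~50% of Carnot efficiency:
        T_hot, dT = 303.0, 10.0                 # assumed ~30 degree delivery temperature
        cop = 0.5 * T_hot / dT                  # ~15
        print(f"Power needed vs direct heating: {1 / cop:.0%}")   # ~7%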

    The reason we do not see such Peltier heat pumps used on a large scale for domestic heating or power generation purposes is, very simply, that it has not been possible to achieve an adequate level of performance relative to the Carnot limit.

    Technically, the obstacle has been the need to find materials which can be used to form thermoelectric junctions having a high Peltier coefficient. This is the factor relating the power converted at a junction to the amount of current passing through it, and it is measured in millivolts at room temperature. The dilemma facing this technology is that if base metals such as copper, iron or aluminium are used to form junctions, the EMFs involved are very small. However, the electrical conductivity is good and this helps to reduce losses. Unfortunately, in such metals good electrical conductivity goes hand in hand with good thermal conductivity, and then we lose heat by leakage through the metal circuit between the hot and cold junctions. For base metals this has been seen as a ‘no win’ situation, because efficiencies of the order of 1% of Carnot efficiency are representative of practical performance.

    For these reasons, the attentions of the last half-century have concentrated on special metals, alloys, and semi-metals or semi-conductors. The price paid for accepting poor electrical conductivity of perhaps one thousandth that of copper has been rewarded by a much reduced thermal conductivity and a very much increased thermoelectric power. The EMF involved is typically in excess of 200 microvolts per degree with a Peltier coefficient of 60 millivolts at room temperature. Such devices are useful for special applications, where small current throughput and low efficiency are of no consequence, but their general use as Peltier heat pumps or electric power generators has been limited.
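
    The two figures just quoted are mutually consistent under the standard Kelvin relation between the Peltier coefficient and the thermoelectric power, Π = αT, as this short check shows:

        # Kelvin relation: Peltier coefficient = thermoelectric power x absolute T.
        alpha = 200e-6      # 200 microvolts per degree
        T = 300.0           # room temperature, K
        peltier_mV = alpha * T * 1000
        print(f"Peltier coefficient ~ {peltier_mV:.0f} mV")   # ~60 mV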

    A typical state-of-the-art power generator using junction materials formed from alloys of bismuth, tellurium, selenium and antimony has a design specification that recognizes a maximum operating efficiency of 22% of Carnot when operating with a high temperature differential of 300 degrees using a source at 600 K. The electric power produced, assuming perfect accord with the design specification, is of the order of 0.1 kW per kg of metal used to form the thermoelectric junctions.

    Practical applications depend upon the energy throughput rate as well as efficiency and what is being offered by the Strachan-Aspden technique is so far ahead of state-of-art technology on both these counts that one must wonder how the technology could have gone so far adrift in missing the real potential of the Seebeck effect.

    Some words from the book ‘Direct Energy Conversion’ by Professor Stanley Angrist bear upon this:

    “At the time of Seebeck’s work, the only devices available for producing electric current were extremely weak electrostatic generators. Fifty years passed before steam engines drove electromagnetic generators. It was, undoubtedly, electromagnetism that caused succeeding generations of physicists and engineers to lose interest in the curious effects of thermo-electricity. The only widespread use of the effect was in the measurement of temperatures by means of thermocouples. It is difficult to say how the history of electrical engineering and electronics would have developed had Seebeck’s discovery been widely employed.”

    Those researching this field today seem to have been attracted by the empirical discovery of new materials and have gone astray in not researching the basic question why metal junctions have such low thermoelectric power. This is very curious, bearing in mind that classical thermodynamics theory tells us that the theoretical power of base metal combinations is of the same order as that of these special materials.

    I must admit, however, that though, with hindsight, we can bring this problem into focus, we did discover the solution only when we were performing diagnostic tests on our principal prototype. In short, we had built something that worked too well and we were wondering why.

    The point rests on the question of whether the metal used increases in electrical conductivity or decreases in conductivity as temperature increases over the operating range. In base metals conductivity decreases with increase in temperature. This means that at the cooled junction the decrease in temperature improves conductivity. Now, if the electric current flowing through the junction is uniformly distributed this will simply mean that the junction has a uniform cooling across its interface. However, if, as occurs in electrical discharges in gases, the flow tends to be in short-lived filamentary surges, there is the real possibility that a current could develop a non-uniform pattern of cooling. A current flow concentrated at one position would form a ‘cold spot’ in the junction interface. The electrical conductivity there would increase and so the current would favour that path of least resistance and become locked on the cold spot. This could drive the temperature so low that the effective temperature governing Carnot efficiency is not what we see from the external actions.

    In other words, owing to the increase in electrical conductivity with drop in temperature, the thermoelectric power falls far below the theoretical potential of the metal junction. There is therefore an enormous loss of efficiency when base metals are used in thermocouples in what has been conventional technology.

    Why does this not affect the special materials as well? The answer is that such materials do not have the same temperature characteristics. The p-type alloy bismuth-telluride (25%) with antimony-telluride (75%), and the n-type alloy bismuth-telluride (75%) with bismuth selenide (25%), have, for example, electrical conductivities which reduce with the temperature if operated above 300°C. Such a temperature characteristic means that cold spots cannot form. Therefore, if we want to use base metals, with their high capacity for delivering current, the only way we can hope to get high efficiency is by somehow preventing the cold spots from forming in these materials. This is exactly what we achieve by the DYNAMIC EXCITATION FEATURE. Its effect is to increase the thermoelectric power of an aluminium-nickel couple from 17 microvolts per degree to a value well in excess of 300 microvolts per degree. Since this factor operates as a squared effect, because it drives proportionally more current and puts proportionally more voltage behind it, the electric power becomes hundreds of times greater than expected on conventional design criteria. This, therefore, is a major advance because it allows us to use base metals with high capacity for delivering current, rather than expensive compositions with very limited energy throughput capacity.
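
    The ‘squared effect’ claim can be quantified directly from the figures given:

        # Thermoelectric power gain and its squared effect on output power.
        alpha_before, alpha_after = 17.0, 300.0      # microvolts per degree
        gain = alpha_after / alpha_before             # ~17.6x voltage gain
        print(f"Power gain ~ {gain**2:.0f}x")         # ~311x, i.e. 'hundreds of times'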

    What is the DYNAMIC EXCITATION FEATURE? In simple terms, this is a technique by which, instead of causing a steady D.C. current flow through the junctions, we interrupt the flow several thousand times per second in such a way that the current flow through the cooled junction relocates rapidly and before the non-uniform temperature or cold spot condition can develop.

    The advantage is that we get the kind of thermoelectric power (i.e. voltage) from aluminium-nickel junctions that is available from bismuth-telluride, but, for comparable dimensions, the higher electrical conductivity of the base metal device allows more than one hundred times as much active power (wattage) to pass through. This takes us well forward technologically, but the TRANSVERSE COMMUTATION FEATURE which will now be described advances performance even further, so far in fact that we can trim back our design objectives on efficiency to simplify the manufacture and so reduce the cost of this technology.

    The conventional design of a thermoelectric device involves having two distinct junctions between the two metals, one junction being at a higher temperature than the other. The metal between the junctions merely serves as a conduit for electric current and, unfortunately, provides a channel for heat loss by thermal conduction from the hot to the cold junction. Rather than trying to develop special materials which facilitate flow of electric current but obstruct heat flow, we followed another route. We also had in mind that a really good commercial device could hardly take in heat and produce electricity if the materials were not good conductors of heat. After all, the heat has to get into the device before it can be deployed into electrical form.

    Our device uses two metal layers which interface over the whole distance from the hot side to the cold side. We then set up a thermoelectric current which the temperature differential drives around the closed circuit formed by the interfacing metal layers. We accept the full measure of heat conduction through the metal by allowing it to travel through the full length of the metal layers. However, note that the route taken by the heat is never further away from a junction interface than the thickness of a metal layer. This means that the heat has repeated opportunity to be effective in generating electric power as it progresses along the junction interface. This is a feature vital to success. Unlike the conventional thermopile where, once clear of the hot junction, the heat travels to the cold junction to be dissipated, we ensure that it has repeated ‘bites of the cherry’, as it were, en route to that destination, with the effect that very little even reaches the point midway where the current flow reverses. By ‘reversal’ is meant flow from metal B to metal A, whereas initially it was flowing from metal A to metal B.

    This feature has a remarkable effect on efficiency because virtually all the heat supplied is converted into electricity. The $64,000 question, however, is how we intercept the electrical energy flow around the closed loop circuit formed by the two contacting metal layers and so gain access to that electricity before it is all dumped back into heat over the interface area where the thermoelectric current flow reverses.

    This is where the ‘transverse’ excitation aspect of our invention holds the key to a successful energy converter. As can be seen from Fig. 1, we stack bimetallic layer upon bimetallic layer to build a stack between a hot surface and a cold surface and the external current flow involves a transverse current flow through the whole stack.

    Fig. 1

    The point then to keep in mind is that at the interface between the two metals forming each layer there is a thermodynamic effect causing a voltage to act from metal A towards metal B and this voltage varies across the layer according to the local temperature. It is greater, the higher the temperature. Because of this there is an imbalance of voltage from point to point in the heat flow direction across the contact interface in each layer when a temperature differential exists between the side faces of the stack. This imbalance causes current circulation in the sense shown in Fig. 2, where one layer is presented in enlarged form.

    Fig. 2

    All this does is to cool one side and heat the other, with the result that the metal conducts heat from the hot side to the cold side.

    However, now suppose that we provide a channel for transverse current flow up or down the stack. This means current flows transverse to the heat flow but, in this layered arrangement, it augments or opposes the thermoelectric current as it traverses a junction, depending upon the direction of flow of the transverse current. The channel for this transverse current is assured if there is good interface contact between all metal layers in a stack formed by metals A, B, A, B, A, B etc. in sequence. Owing to the symmetry of the system, a current travelling to the right in metal B will have to overcome the same potential barrier or back EMF at the cold junction whether it goes up or down the stack. However, we cannot have some of the current contributing to transverse flow by going up the stack in one part whilst elsewhere some goes down the stack. Either the current all goes up, or it all goes down, or there is no transverse current flow at all and the thermoelectric current is everywhere confined to its own bimetallic layer.

    The current will take the path of least resistance, or will it? If there is an external resistive load connected in the transverse current flow path, then the easier route for current will be the closed circuital track shown in Fig. 2. Some small amount of current should flow either up or down the stack, because the external circuit offers a supplementary route for current. However, this will not give us scope for causing cyclic interruption of the primary junction current, nor will it give access to any real power output. Indeed, without the dynamic excitation, the voltage driving the circuital current in Fig. 2 is very low. Nevertheless, the circuit is a bistable system and how it behaves when relying solely on the thermoelectric voltage produced by the heat input is not the same as its response when a voltage surge up or down the stack governs the action.

    Given a trigger effect which causes a transverse current surge up or down the stack, the junction current can be interrupted by a fast cycling switch in the external circuit and, once this happens, the full high powered thermoelectric action comes into effect, but this is a condition only effective if the transverse current is strong enough to exceed the normal steady state junction current. Given some intrinsic inductance or capacitance to sustain transverse voltages which carry the action through the zero current transient states, the device can become locked into the dynamic excitation mode to deliver an electrical current powered by the full thermoelectric action. In this mode the current flow is represented by the snaking flow shown in Fig. 3.

    Fig. 3

    The device actually works and exhibits extremely high efficiency in converting heat energy into electrical power output. Indeed, the capacitative versions of the device which have been constructed use bimetallic layers less than 0.1 micron in thickness (one micron is a millionth of a meter) and 300 such layers of one square cm area interleaved with 28 micron thick dielectric could generate 300 milliwatts of electrical power using just over one calorie per second of heat input at 40 degrees Centigrade and output at 20 degrees.

    This is quite remarkable, bearing in mind that even these temperatures and their differentials are so low. It is even more remarkable when one realises that the power generated is at a rate in excess of 20 kilowatts per kilogram of metal used to form the thermoelectric circuits. This capacitative device does, however, make use of the enhanced electrical conductivity of thin films, which accounts for the very high efficiency obtained.
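
    The 20 kilowatt per kilogram figure can be reconstructed from the stated geometry. The sketch below assumes handbook densities for aluminium (2.70 g/cm³) and nickel (8.90 g/cm³), together with the film thicknesses given later in this report (0.02 and 0.04 micron); the layer count and area are as quoted above.

        # Power per kg of thermocouple metal, from the stated geometry.
        layers, area_cm2 = 300, 1.0
        t_al, t_ni = 0.02e-4, 0.04e-4            # film thicknesses in cm
        rho_al, rho_ni = 2.70, 8.90              # g/cm^3, handbook densities (assumed)

        mass_g = layers * area_cm2 * (t_al * rho_al + t_ni * rho_ni)
        power_W = 0.3                            # the 300 mW output quoted
        print(f"metal mass ~ {mass_g * 1000:.1f} mg, "
              f"{power_W / (mass_g / 1000) / 1000:.0f} kW per kg")   # ~24 kW/kg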

    To bring the design parameters into perspective it is useful to consider a formula for the figure of merit Z normally applied to thermoelectric systems. This is presented in Fig. 4.

    Z = βα²σγ/K

    FIGURE OF MERIT Z
    TRANSVERSE COMMUTATION FACTOR β
    THERMOELECTRIC POWER (VOLTS/DEGREE) α
    SPECIFIC ELECTRICAL CONDUCTIVITY (MHO/CM) σ
    THIN FILM ENHANCEMENT FACTOR γ
    THERMAL CONDUCTIVITY (WATT/CM PER DEGREE) K

    Fig. 4

    This formula, when multiplied by the operating temperature, in absolute degrees Kelvin (say, 300 at room temperature) is a measure of the potential electric power generated as a ratio of the heat conducted from the hot junction to the cold junction and so wasted. This assumes operation with a low temperature differential and allowance has to be made for the duality of the metal paths, which are in parallel for heat flow and in series for electrical current flow. This tends to reduce the ratio by a factor of 4. Also, the potential electric power output depends upon the load resistance as related to the internal resistance of the device.

    All in all, therefore, to build a viable thermoelectric power converter the thermoelectric power α has to be as high as possible. The Strachan-Aspden devices tested so far are offering α values in excess of 300 microvolts per degree centigrade using base metals for which the bulk specific electrical conductivity σ is in excess of 100,000 mho/cm and the specific thermal conductivity is about 2 watt/cm per degree. On these figures, at the temperature of 300 K, the formula gives a near unity ratio of electrical power to thermal power lost.
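
    That ‘near unity’ claim follows directly from the quoted values, taking β = γ = 1 for the bare comparison:

        # Dimensionless figure of merit ZT from the quoted material values.
        alpha = 300e-6       # V per degree
        sigma = 100_000.0    # mho/cm
        K = 2.0              # watt/cm per degree
        T = 300.0            # K
        ZT = alpha**2 * sigma / K * T
        print(f"ZT ~ {ZT:.2f}")    # ~1.35, i.e. near unity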

    However, the Strachan-Aspden technique earns its qualities by virtue of the factor β and also the factor γ. These are the coefficients representing the effects of the transverse commutation feature and the thin film feature, respectively. Each factor represents an order of magnitude, giving a ten-fold benefit.

    The electrical conductivity of a thin film of a few hundredths of a micron thickness can be more than 10 times greater than the bulk value. Such film was used in the main prototypes tested. We did not measure the factor γ, because the bimetallic thin film material was available commercially with a rated electrical resistivity of 0.1 ohm per square. It comprised thin film layers of aluminium of 0.02 micron thickness and nickel of 0.04 micron thickness. Knowing the bulk values of σ as given by reference books, the value of γ was estimated as being about 10 from these data.
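
    That estimate can be reproduced. The sketch below assumes handbook bulk resistivities for aluminium (2.65e-8 ohm-m) and nickel (6.99e-8 ohm-m) and compares the sheet resistance they would predict with the rated 0.1 ohm per square:

        # Estimate the thin-film enhancement factor gamma from sheet resistance.
        rho_al, rho_ni = 2.65e-8, 6.99e-8      # ohm-m, bulk resistivities (assumed)
        t_al, t_ni = 0.02e-6, 0.04e-6          # film thicknesses, m

        # Two films in parallel: sheet conductance is the sum of t/rho per layer.
        Rs_bulk = 1.0 / (t_al / rho_al + t_ni / rho_ni)   # ~0.75 ohm/square
        Rs_rated = 0.1                                    # ohm/square, as supplied
        print(f"bulk prediction {Rs_bulk:.2f} ohm/sq, "
              f"gamma ~ {Rs_bulk / Rs_rated:.0f}")        # ~8, of order 10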

    Concerning the factor β, this represents the repeated ‘bites at the cherry’ effect as heat gets repeated opportunity to convert into electricity as it is conducted into the device. For conventional thermocouples where the temperature drop between junctions is linear, β is unity. However, we had a system in which the temperature was changing much as depicted in Fig. 5.

    Fig. 5

    The dotted line represents the linear temperature profile and the curve the profile we are exploiting. β is a measure of the conventional temperature gradient of the dotted line as a ratio to the minimal temperature gradient midway between the junctions. The latter is a measure of the heat energy going to waste and the much larger gradient of the full curve at the hot junction is a measure of the heat energy entering the device before conversion into electricity. Because the midway gradient is much lower than the linear case, we have a high β factor and because it is very much lower than the input temperature gradient we have a very efficient device capable of taking in far more heat than a conventional device.
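
    As an illustration only, since the report does not give the actual temperature profile, a fin-type distribution in which the gradient varies as cosh(m(x - L/2)) has exactly this character: steep at the junctions and minimal midway. On that assumption β can be read off as the ratio of the mean gradient to the midway gradient:

        # Illustrative beta for an assumed profile with T'(x) ~ cosh(m(x - L/2)).
        import math

        def beta(mL):
            """Mean gradient / minimum (midway) gradient for this profile."""
            return 2.0 * math.sinh(mL / 2.0) / mL

        for mL in (3, 6, 9):
            print(f"mL = {mL}: beta ~ {beta(mL):.1f}")   # 1.4, 3.3, 10.0
        # beta ~ 10 is reached for mL ~ 9, i.e. strong conversion en route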

    I believe that I have said enough to outline why the Strachan-Aspden thermoelectric power converter works so well. The ongoing research relates to how far we can compromise on the thin film factor γ with a view to using thick metal layers and relying exclusively on the β factor of the transverse commutation feature. Unquestionably, our primary products will use the dynamic excitation feature to get the advantages of power from base metals, but we foresee also the use of special metals as well, coupled with designs based on the β factor.

    I should like to end by describing how, even before we filed our first patent application or got involved commercially, we got a measure of the β factor applicable to our first demonstration device. Very simply, we had built a small panel having metal faces and layers of thin metal film running from face to face but embedded in an insulating dielectric. Looked at in the direction of heat flow, the metal and dielectric were side-by-side, with the metal presenting a cross-section amounting to about one five hundredth of that of the insulator. Such a device, therefore, was not, in thermal conductivity terms, a through-metal conductor.

    To get a measure of its properties, we put an ice cube of standard size on the upper metal face and attached the lower face to a commercial heat sink base at room temperature. The ice melted, partly by heat absorbed by air convection from above, partly by heat loss by thermal conduction through the intervening insulation and partly by heat conduction through the metal. It took in excess of 20 minutes to melt completely. This was with the output leads from the device unconnected, that is, on open circuit. I knew from a test at home that such an ice cube on a metal work surface took about 5 minutes to melt and took 30 minutes on a Formica-topped kitchen table. The point of interest then was that when the same sized ice cube was used on the device with the output leads connected to an electric motor or a resistor load, the time of melting reduced to between 3 and 5 minutes. The motor stopped running, of course, soon after the ice had completely melted, but the message from these very simple measurements was clear testimony of a very high internal efficiency in the generation of electricity from heat. There being no independent electrical power supplied to the device and the ice being the only perturbing influence, the connection of the electrical load had diverted more than 80% of the heat energy around the wired load circuit, and this was via a capacitance. That 80% or more of the throughput was electrical power, and most of the remaining 20% of heat conduction was seemingly unnecessary loss, because much of it was due to extraneous convection or to heat conduction through what was unnecessarily thick dielectric insulation.
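
    The 80% figure follows from the melt times alone. As a sketch, taking the open-circuit melt time as 20 minutes and the loaded time as 4 minutes (the middle of the quoted 3 to 5 minute range):

        # Fraction of heat diverted electrically, inferred from ice melt times.
        t_open_min = 20.0    # open circuit: over 20 minutes to melt
        t_load_min = 4.0     # loaded: 3-5 minutes (midpoint assumed)

        # Same ice cube, so total heat is fixed; throughput rate ~ 1/melt-time.
        # The open-circuit rate measures the residual conduction path alone.
        fraction_electrical = 1.0 - t_load_min / t_open_min
        print(f"~{fraction_electrical:.0%} of throughput diverted via the load")  # ~80%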

    It was from such a very simple test that we knew the β factor had to be 10 or more, but we were carried along by that empirical performance which, may I say, was so high that, for a time, until we could make more precise measurements, mainly of temperature, we thought we had achieved the impossible by going above 100% of Carnot efficiency.

    As it is today, in our best performing thin film prototypes we still have difficulty measuring just how close we are to the ultimate Carnot efficiency.

    Concerning thick film designs, which do not have the high thin film conductivity feature, our research is progressing in sustaining the thermoelectric voltages achieved by the DYNAMIC EXCITATION FEATURE and exploiting the β gain by the use of the TRANSVERSE COMMUTATION FEATURE.”

    ******************************************

    Concerning the latter comment about thick films, this was a theme which this author (H. Aspden) urged at the time (October 1989), but this was shortly before the R & D funding ceased and the test facilities closed when the other business interests of the sponsors failed.

    This author did, independently, seek to experiment with a small test unit in which thin nickel plates plated on both sides with copper were bonded into an integral assembly for resistance testing. It did not function as hoped when subjected to a small temperature differential.

    However, this was a first attempt, made at a time when thoughts were on the collapsing sponsorship, and it later became evident that the external test circuit used lacked the current capacity needed to cope with current oscillations at the requisite frequency and strength.

    Even so, in this latter regard, the nickel sheet material used in these experiments would, with its copper plating, have posed the same problem that has now (1994) been encountered in a much larger test device: a multiple bimetallic interface in a series circuit can, without an initial temperature gradient to prime the action, avoid the Thomson Effect current diversion and thereby generate junction heating that is not segregated from the Peltier cooling.

    The author’s current research, which will be described in Energy Science Report No. 3, is now directed along a track which aims to overcome these particular problems in an effort to avoid transverse current excitation through a dielectric medium whilst constraining heat flow to be transverse to the current and to the EMF attributable to the Nernst Effect.

    APPENDIX IV

    The Strachan-Aspden Invention: Test Results
    [October 1989 Report]

    This report is a copy of the TEST REPORT presenting the status of the Strachan-Aspden Energy Converter project on 19th October 1989, as included in the 23rd October 1989 document.

    Introduction

    The device tested was built expressly to verify design criteria, essentially to check that we were right in eliminating certain design features present in the first demonstration device. The tests confirm our theoretical assumptions.

    In order not to alter too much in this stage of development, the same commercial bimetallic coated dielectric was used and the same physical dimensions of the thermocouple junction interfaces. These are not optimum, particularly concerning thickness of metal layers and possibly concerning choice of the actual metal combination as well as the length dimension between the thermal surfaces.

    However, whereas the operating frequency was 500 kHz with the first device, the present device runs at 18-25 kHz, depending upon load and voltage output rating. Such frequencies impose design constraints, which will not be a problem if we can build a non-capacitative device, now predicted as a possibility using the verified design principles.

    The primary objective of the tests reported here is not to see whether the efficiency of the device assures its commercial viability as it stands, because we can certainly design to achieve a far better power rating and a simplified technique of fabrication. The objective is to measure the efficiency of the device for operation over a moderate range of ambient temperature, with atmospheric, geothermal and waste heat in mind as energy sources.

    The measure of efficiency and study of factors affecting efficiency are vital to projecting commercial applications and designing products for manufacture, especially concerning the Peltier mode for refrigeration and cooling and also for conjecturing products which store electrical energy as heat and regenerate electricity. The use of plastic film as a substrate for the bimetallic layers has limited the temperature range of the particular device tested. Also, owing to the specific form of the electronic switch system built for the device, tests in the refrigeration (Peltier) mode did not prove viable for reasons in no way related to the device structure and measurements of efficiency of Peltier mode operation have been deferred.

    An overall performance figure, allowing for all circuit losses and output voltage transformation, which can be relied upon for conversion of heat to electricity with temperature differentials as low as 10 to 30 degrees Centigrade is 70% of Carnot efficiency.
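
    To put that figure in context, the sketch below shows the net conversion efficiency it implies over the stated differential range, assuming a hot side near room temperature (~300 K):

        # Net efficiency implied by 70% of Carnot over 10-30 degree differentials.
        T_hot = 300.0                      # K, assumed near-room-temperature hot side
        for dT in (10.0, 20.0, 30.0):
            carnot = dT / T_hot
            print(f"dT = {dT:.0f}: Carnot {carnot:.1%}, net {0.7 * carnot:.1%}")
        # roughly 2.3%, 4.7% and 7.0% net for 10, 20 and 30 degree differentials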

    The Structure of the Test Device

    The device is constructed from 300 layers of 28 micron thick high dielectric constant plastic film as a substrate for sputtered junctions of two layers of metal, nickel and aluminium. Each layer had a width of 3 cm and a length of 0.25 cm, the width dimension and edge forming the surface interfacing with the heat exchange surfaces. The aluminium film was 0.02 micron thick and the nickel film 0.04 micron thick.

    These 300 layers therefore defined 300 junctions each having exposure to a hot and a cold face of the panel form of the assembled device. These were divided into 20 groups of 15, each group comprising 3 sub-groups of 5 layers. Each such sub-group is bounded by a layer of copper as an electrode for wiring the device into the chosen series/parallel configuration. Thus, in effect, there were 15 layers stacked to form a series capacitor unit and 20 such units were wired together to form a parallel connection of the capacitor units, ultimately having connection to an external circuit by two supply wires.

    The copper electrodes were narrower than the junction length to reduce their thermal conduction contribution to the heat path. The entire stack of 300 junction layers was bonded on to a ceramic powder composition base to give good heat coupling but to ensure electrical insulation from the heat sink base an upper aluminium sheet was bonded by an electrical insulating heat sink compound to the upper surface of the stack to form the other external heat surface.

    When a temperature differential is set up between these external heat surfaces there is a thermoelectric charging of each junction which contributes to the energy storage in the capacitor stack. Indeed, as a function of the temperature differential, the capacitor so formed has a greater effective capacitance than would be expected purely from calculation based on the dielectric constant and dimensions of the assembly. Typically, the capacitance can be of the order of 1.5 microfarad for this very compact assembly.

    The thermoelectric current acts to sustain the recharging of the stack as it is systematically charged and discharged by a fast operating switch unit. For the test to be described this unit comprised five electronically controlled switches operating in parallel expressly to ensure that there is a minimal loss of electric potential, inasmuch as the EMF of a mere 15 junctions was being switched. The control of this switch bank involved a frequency generator input of negligible power. Note that it was a specific feature of the first prototype test device that it included a self-activated oscillator for switch control powered by the electric signal generated by the device.

    The action involved, therefore, can be seen as one involving deploying thermoelectric power into the charging of the capacitor stack and then, as fast as possible having regard to the recharging speed, transferring the stored energy to an output circuit by a cyclic switching operation. Subject to the capacitative delays and charge storage aspects, the action can also be seen as one involving circulating thermoelectric currents in the bimetallic layers with a superimposed transverse current flow through the capacitor.

    A high Q transformer winding is intermittently connected to the stack via the switches. This presents a low impedance into which the capacitor stack drops its thermoelectrically acquired charge. The secondary winding of the transformer then transforms the resulting voltage to a value which matches well with the load, both to give suitable measurement voltages and also to ensure that the load seen by the stack has a sufficiently low impedance to draw out the charge quickly. Note that the device has its own internal resistance and the load resistance has to draw most of the power.

    Measurement Criteria

    The device tested is a flat square metal-faced unit which has a pair of electrical input/output leads. Given a temperature differential across its metal faces an electrical power output is available. In terms of the heat input, this electrical output depends (a) upon the internal design structure of the device (which cannot be varied as part of the test) and (b) upon the manner in which the load circuit is electronically controlled. The latter control, though as just described working successively to charge and discharge the capacitor form of the device, also implements what we term ‘dynamic current excitation’ and this greatly enhances the power output. The A.C. power supplied is converted to D.C. and smoothed for measurement. The performance depends upon matching the load with the device to secure optimum output.

    Basically the test to be reported is very simple. By feeding in a sustained amount of heat, controlled by electrically powering a small resistor in a liquid heat bath, the task is to reconvert some of that heat back into electricity as D.C. in an output load at a steady voltage and current. This will give basis for precision tests on power in and power out, but to assess the result obtained it is important to have a very good measure of the temperatures of the two metal faces. The temperature measurement poses problems, because, firstly, we must beware of any non-uniformity of temperature across the operative surfaces and, secondly, we must know the extent, if any, of any interference caused by the measuring device or probe.

    Earlier test results had been clouded by the problem that the bounding 1 mm thick aluminium plate was not able to buffer the heat distribution to assure a uniform temperature, given a concentrated heat source (electrically energized carbon resistors) on the external face.

    For this reason the initial measurements on the device described, which were unreliable in ranging from 50% to 100% of Carnot efficiency, according to test conditions, have been repeated using a stainless steel can containing water heated internally by a chain of four 10 ohm resistors. This can was specially built with a flat lower surface able to interface well with the heat surface of the device and was mounted thereon using heat conducting paste. Also, the whole structure was housed in a close fitting heat insulating container.

    The temperature measurements involved calibrated platinum resistance probes registered by a digital voltmeter and were from time to time confirmed with an alcohol thermometer.

    Peripheral Test Information

    It was not part of the test reported here to repeat certain experiments that were made in the earlier development stages. Nor could we measure directly the thermoelectric EMFs in the sections of the stack built into the device. During construction it was part of the discipline of the assembly procedure to test each part-assembly of five junction layers to verify insulation and be sure that it was performing with the power of 2 millivolts per degree Centigrade when subjected to dynamic excitation. Any that did not match the uniformity requirements were rejected. However, as will be seen, the test data do tell us that this thermoelectric EMF is at work in the operating device because the output EMF from a stack is measured and the voltage at the output terminals is roughly equal to the internally produced thermoelectric EMF times the measured efficiency relative to the Carnot limit. This also means that the current output as a measure of junction current relates by this total thermal power to the potentials active at the junctions and so gives a measure of the thermoelectric power in the device.

    This thermoelectric power is the crucial factor in our onward design of any products. It was 400 microvolts per degree Centigrade per junction pair in the above device and this applied to the thin film (0.02 micron aluminium on 0.04 micron nickel) assembly. The two metals had no intervening metal; they were vapour deposited. In contrast, earlier tests had shown that a stack of metal plates of iron and nickel of sub-millimeter thickness with soldered junctions gave a thermoelectric power per junction of 118 microvolts per degree centigrade with dynamic current excitation. Given that we can reasonably expect iron and aluminium to present similar thermoelectric action when forming junctions with nickel, the question we need to resolve is whether that gap between 118 microvolts and 400 microvolts is due to the thin film aspect in the vapour deposited case or the adverse effect of the intermediate solder in the thick film test case.

    The most important test data of interest, therefore, at this time and before products are designed and manufacture evaluated, are

    1. the actual efficiency for limited ambient temperature use of the device already constructed, and
    2. the thermoelectric power of a thick metal junction assembly with no solder connections.

    This report addresses the first of these issues and the next report will deal with our findings on the other question.

    The onward test programme must relate to the factors such as optimum film thickness and dielectric thickness in the vapour-deposited/capacitor system or thickness of a thick film version, optimum electronic design and excitation frequency as well as waveform profiles, choice of metals, structural dimensions (panel thickness) and electrical insulation/heat conducting spacing material at the interface of the external metal surfaces and the junction assembly.

    The Test Data

    These tests were performed independently by Scott Strachan and Harold Aspden during different periods in October 1989.

    The Strachan tests were performed between 10th October and 12th October. The test results obtained by Aspden on 17th and 18th are those listed in Tables II and IV.

    The test apparatus is as shown in Fig. 1.

    Fig. 1

    Application of heat energy is via the medium of heated (or cooled) water in a container on the upper heat exchange surface and use of a commercial heat sink base at room temperature replicated conditions which would apply to production devices. The cold underside can be considered to be at an even temperature because it is mounted on a massive heat sink with a recognized high heat dissipation capacity.

    The water heat sink provided a uniform temperature interface and this temperature was measured by a commercial platinum resistance temperature probe calibrated to give a measure of temperature via a digital voltmeter. A similar and separate heat probe was used to measure the temperature on the surface of the base heat sink.

    Owing to heat transfer through the device, albeit mainly via the electrical conversion route, as Peltier cooling occurs at one face and Peltier heating at the other, it is inevitable that the actual temperatures at the working interfaces of the device will be slightly lower than the hot temperature measured and slightly hotter than the cold temperature measured. This means that the true efficiency relative to Carnot will be just a little greater than that calculated using the measured temperatures. No allowance is made for this in the test data, because the temperature drops involved would be present in an engineered installation and so the test results give an overall measure of effective efficiency which can be regarded as representative of commercial conditions.

    Preliminary Tests

    The following tests were conducted under steady-state conditions, that is, the rate of heat input was pre-set and temperature readings as well as electrical power output readings were made only after the system had stabilized.

    TABLE I

    TEST HEAT INPUT OUTPUT TO 1 OHM TEMPERATURE EFF.

    No. Volts Amps Watts Volts Watts T’ T %

    1 6.64 0.179 1.19 0.125 0.016 33.8 20.0 30

    2 9.16 0.244 2.23 0.280 0.079 40.4 20.6 56

    3 11.22 0.298 3.34 0.450 0.202 47.5 20.8 73

    4 12.60 0.337 4.25 0.520 0.270 53.4 21.0 64

    5 19.00 0.530 10.07 0.720 0.518 62.2 23.0 44

    **************

    The above readings were the first set of readings to be made on the device under proper laboratory test conditions using electronic test circuitry that was designed to operate essentially with power output voltage above 0.3 volts, which is a nominal threshold for effective operation of the germanium diodes used to rectify an A.C. output. For this reason, attention centres on tests No. 3 and 4. Concerning test No. 5, this fell short in measuring true efficiency for reasons to be discussed below, owing to a heat dissipation problem which set in above 55 degrees Centigrade and upset the measurement on the input side.

    Immediately, however, one can verify the design assumptions by considering the ideal 100% of Carnot condition if applied to test No. 3. This would require all the heat energy input at the hot side to convert to electric power given by Nπi, Where N is the number of junctions (300), π is the Peltier coefficient αθ and θ is the temperature of the hot junction in Kelvin (320). With α as 400 microvolts per degree, this gives an input power of (38.4)i watts. Now, i is the junction current and we regard this also as external current, subject to allowance for the series/parallel junction combination and the transformer ratio. In effect, therefore, the 100% of Carnot situation can only occur if the heat power supplied at 320K is precisely such that it is 3.34 watts, which corresponds to a junction current of 87 milliamps. This flows in each of 20 parallel circuits to suggest a total current of 1.74 amps would suffice to carry all the heat input through as electricity.

    This checks with the measured current of 0.450 amps if allowance is made for the 5:1 transformer ratio. In fact, this measure is 2.25 amps compared with 1.74 amps needed for 100% efficiency and this is 77% agreement (cf. the 73% of Carnot efficiency measured). This is very good agreement, also bearing in mind that the 400 µV/K thermoelectric power can be effectively diminished by parasitic current flow owing to the 20 parallel-connected circuits in the device and may need some offset for the partial action of the component added by the Thomson effect. The latter does not contribute to the Peltier heating and cooling at the junction proper, but does drive current as part of the thermoelectric power.

    The measurements of current output in relation to heat input fit remarkably well and confirm the high thermoelectric power, α of 400 µV/K, that had been measured on a test basis as each five-junction part-assembly was built into the device.

    It had been foreseen from theory that about 260 µV/K would be true thermoelectric power and about 170 µV/K could be due to Thomson effect. Therefore, a 60-70% efficiency factor might imply a measured output voltage of the order of 250 µV/K. In test No. 3 the 0.450 volts came from a 26.7 degree differential and 15 junctions in series with a 5:1 transformer ratio. This works out as a junction EMF of 225 µV/K.

    Concerning the drop of efficiency as more heat is fed into the device (test No. 5) it is found that the apparatus begins to lose heat from evaporation as bubbles form around the heater. This loss of heat is such that the apparent performance drops appreciably with increasing temperature. However, this is an artefact of the way in which the heat input is measured by the electrical heating of water. Evaporation on the input side of the device cannot be a fair factor in the test, which is only viable provided bubbles are not formed in the test heat input source or the latent heat carried away by those bubbles is somehow accounted for.

    Load resistance versus internal resistance

    The value of the load resistance, given a variable heat input rate, is an important consideration. The heat input determines the operating temperature and this, in its turn, determines the output EMF. The load resistor and this EMF determine the current output, but for optimum operation it is necessary for this current output to be the full junction current. It is possible for some junction current to be internally diverted by closed loop circulation between the metals forming a bimetallic layer. Such circulation would transfer energy from the hot to the cold junctions without the Carnot component being diverted for use in the external load circuit. However, the test bears out an assumption which emerged in the development of the ‘cold spot’ theory of the device. The expectation from this was that, at least when operating in the Seebeck mode under test, the dynamic current excitation developing the enhanced thermoelectric power would drive the junction current exclusively through the external circuit. On this basis, the only load matching consideration is how the internal resistance of the device relates to the load resistance in contributing to ohmic losses.

    For the 1 ohm load condition of test No. 3 we can interpret the output of 0.450 volts as a measure of 0.090 volts on the input side of the 5:1 transformer. This is the output of 15 junction pairs and, for the temperature differential of 26.7 degrees, it implies that a thermoelectric power of 225 µV/K has reached the load circuit. Bearing in mind that a 400 µV/K thermoelectric power is known to be potentially active and that the 1 ohm load is a 0.04 ohm load on the input side of the transformer (owing to the squared effect of the 5:1 transformer ratio), this suggests that the internal load resistance is 0.03 ohms if there is no loss of potential. A value of 0.02 ohms is calculated from knowledge of the resistance of the commercial material used to build the device (see comment on this which follows) and this is probably the true value. Such resistance applies if virtually all the external current is flowing by snaking action through the thermoelectric junctions with very little internal closed loop flow detracting from that performance. These considerations tend to confirm the design assumptions used.

    The internal ohmic heat loss is then estimated from the measured external current 0.450 amps, which scales to 2.25 amps for the input side of the transformer and this current in 0.02 ohms implies an ohmic heat loss of 0.10 watts.

    This can be reconciled with the 0.202 watt output with an estimated 70% of Carnot efficiency, much as is deduced for test No. 3, especially as some of the ohmic heating at 0.10 watts is available for regeneration of electricity.

    This discussion really aims to assess the scope for increasing efficiency further by future design which reduces internal resistance, it being important to understand the factors affecting performance in the design under test.

    It now remains to reconcile the relatively low efficiency of test No. 1, for example, with that of test No. 3. This is easily explained simply because the germanium diodes used in the bridge rectifier connected to the transformer output absorb energy, becoming good conductors only as the forward voltage across them rises above 0.3 volts. This will present no problem in production thermo-electric devices because many more junctions than 15 will be connected in series and this will result in high performance relative to the Carnot limit, even with the low temperature differentials represented by test No. 1.

    However, in the verifying tests to be reported below, this will be checked in view of the importance of applications working with quite low temperature differentials.

    Calculation of internal resistance

    The efficiency necessarily depends upon the internal resistance of the device. This may be calculated approximately using the fact that the 0.1 ohm per square specification of the bimetallic layer arises from parallel flow through 0.2 ohm per square of nickel and 0.2 ohm per square of aluminium. The device involves series flow through 300 bimetallic layers of width 3 cm and length 0.25 cm, but the current will not follow the longest route. This is somewhat less than 0.03 ohms per layer. The layers were connected 15 in series and 20 in parallel and this implies a total internal resistance somewhat less than 0.75 times 0.03 ohms or, say, 0.02 ohms. This is the value estimated from the measurement data reported above.

    Verification Tests

    These tests were performed by H. Aspden on 17th and 18th October. The first set of tests reported in Table II concentrated on the peak range of efficiency indicated by Table I. It was found that even a small change of heat input rate meant waiting for between 20 and 30 minutes to secure temperature equilibrium. The latter is vital to proper measurement of temperature. The temperature readings are believed to be correct to within 0.05 degrees Centigrade and, as far as can be judged, any error from making measurement at a surface point slightly offset from the actual operative thermal interfaces would mean that the efficiency values obtained are ‘worst case’. It is, therefore, felt that the efficiencies now registered are reliable in indicating what can be achieved in a commercial installation.

    TABLE II

    TEST HEAT INPUT OUTPUT TO 1 OHM TEMPERATURE EFF.

    No. Volts Amps Watts Volts Watts T’ T %

    6 9.50 0.253 2.40 0.300 0.090 41.9 18.6 51

    7 10.00 0.266 2.66 0.340 0.116 42.8 18.8 57

    8 10.50 0.279 2.93 0.375 0.141 45.1 19.3 59

    9 12.00 0.318 3.82 0.490 0.240 49.4 20.2 69

    10 13.00 0.345 4.48 0.545 0.297 54.0 20.6 65

    11 14.00 0.371 5.19 0.565 0.320 55.0 20.9 59

    These tests reported in Tables I and II are characterized by the use of electrically heated water as the thermal input source, as opposed to an electrically heated metal interface. The object was to get more uniformity of temperature and so a precise indication of the true temperature. However, above 55 degrees Centigrade the water loses heat rapidly owing to vaporization and then the measure of heat input rate fails to indicate true efficiency.

    The tests certainly reveal that efficiency of heat to electricity conversion of 70% of the Carnot level is a reasonable expectation with temperature differentials in the 30-40 degree range close to ambient conditions, but this is further supported by the tests in Table IV.

    Based on test No. 10 a check was made of the effect of changing the dynamic excitation frequency. The operating frequency for the data in Tables I and II was 18 kHz. This had been chosen for optimum tuning of the circuits. As might be expected, there was a drop off in efficiency with reduction of frequency. The data given in Table III apply.

    TABLE III

    TEST FREQ. THERMAL INPUT POWER OUTPUT TEMPERATURES EFF.

    No. kHz Watts Volts Watts T’ T %

    10 18 4.48 0.545 0.297 54.0 20.6 65

    11 14 4.48 0.530 0.281 54.0 20.6 61

    12 10 4.48 0.495 0.245 54.0 20.6 53

    ************

    This test does not mean that the frequency has to be of the order of 18 kHz to obtain the highest efficiencies from the dynamic excitation. It is just that the capacitor structure of the particular test device with its transformer inductance and self-inductance has an optimum switching frequency. A problem ahead is to assess the best frequency for dynamic excitation giving the highest thermoelectric EMFs and then design the device so that the capacitance and inductance match this operating frequency.

    The next set of experiments involved a change in the transformer from the one used in the above tests which had a 5:1 ratio to a new one with an 8.5:1 ratio. This required a 25 kHz excitation frequency for best response, owing to the change in inductance on the primary side.

    The object of this change was to explore the loss of output power for very low temperature differentials, which loss resulted from a threshold cut-off in the germanium diode bridge rectifier circuit used to produce smoothed D.C. from the transformer output. The problem faced was due to the A.C. output waveform being of the form shown in Fig. 2. As the signal amplitude increases, more and more of the signal rises above the operating threshold of the diodes and, to get a realistic efficiency measure, substantially all of the signal has to lie above the threshold.

    Fig. 2

    The sole purpose of the following tests in Table IV was to check to be 100% sure that we still have an efficient converter using the temperature differentials of Test No. 1. This test has given 30% of Carnot efficiency with a 13.8 degree differential but the output voltage was below the diode threshold for a significant part of the dynamic excitation cycle. By stepping the voltage output up by the greater transformer ratio, a greater portion of the signal becomes effective in overcoming the bias in the diodes.

    TABLE IV

    TEST HEAT INPUT OUTPUT TO 2 OHM TEMPERATURE EFF.

    No. Volts Amps Watts Volts Watts T’ T %

    13 8.48 0.226 1.92 0.400 0.080 38.3 18.8 66

    14 6.20 0.168 1.04 0.255 0.032 32.8 18.8 67

    *************

    The data clearly show that the comparable results for tests Nos. 1 and 2 suffer from the diode cut-off, that the problem has been easily overcome by output circuit redesign and that efficiencies of 66% plus relative to the Carnot level are to be expected as the operating norm of the Strachan-Aspden converter, even when the temperature differential is only a few degrees.

    Tests using iced water

    It was possible to cool the heat sink base by immersion in a tray of iced water and hold the upper heat surface of the device at ambient temperature. The results (power output for a given temperature differential) were fully in accord with the performance just reported for similar small temperature differentials. The ice test of the first prototype device holds up in that electricity can be produced by melting ice. Such tests, however, do not give a measure of efficiency because the rate at which the ice is melting is difficult to measure. However, the efficiency must be as indicated in the tests of this report, because the device and its circuit only ‘see’ temperatures at the working heat surfaces.

    Conclusions

    The tests reported above are definitive tests on a Strachan-Aspden device using thin film thermoelectric techniques with dynamic excitation and transverse commutation in a capacitative assembly.

    The tests aimed at determining efficiency. The efficiency results were typically 65-70% of Carnot level for differentials of temperature in the ambient range. The power rating measured in heat throughput rate was 2 kilowatts per square meter for a 20 degree temperature differential. The corresponding electric power generation with this very low temperature differential is 80 watts per square meter. However, efficiency, rather than throughput power, was the purpose of these tests and it is important to remember that the working metal involved in the test device is interfacing over only one part in 1000 of the total area of the heat input surface. As we adjust the metal film thickness relative to the dielectric and conceivably eliminate the dielectric, the full design potential can be exploited. It is such that the technology of the device can cope with any practical level of heat input per unit area that available heat sources (or heat transfer materials) can supply at the operating temperatures specified.

    Concerning tests in Peltier mode, meaning input of electricity to cause heat transfer between the heat surfaces, this was not possible with the specific design of excitation control circuitry of the device just tested. The first prototype incorporated a self-tuning circuit which could adjust to give the best dynamic switching rate.

    Such tests will be performed but until they have been performed either on the subject device or other implementations we cannot pronounce on the efficiency for Peltier mode operation. Our feeling is that it will be high, but perhaps not as high as for electrical power generation in Seebeck mode. However, high efficiency is more important for electrical power generation applications and an outlook of 70% of Carnot with more expected from production designs is very good indeed.

    Footnote

    Much of the research effort between February 1989 and July 1989 involved efforts to fully understand the relative roles of the factors which contributed to the working of the first prototype. It was a set back that an attempt at partial reassembly of that device to test efficiency had caused its destruction by internal shorting owing to chemical penetration. However, the new device, built in July-August 1989 period and modified according to the results of that research, now verify the design assumptions and have yielded the efficiency data. Thick-metal-film test converters are now under construction and once the tests on these are complete we will be in a position to project how best to proceed to a product stage.

    Our patent position has been brought into line with these recent findings so that our main international cover will relate directly to design variations centred on the structure incorporated in the test device discussed in this report. Such cover also caters for what is expected to be a successful outcome on the thick-film embodiments.

    H. Aspden: 22nd October 1989

    **********************************************

    The above test report describes the status of a research test at a time when the interest centred on measurement of efficiency. In the diagnostic research phase which followed it was realised that there was a gain in performance that came from the Thomson effect driving current along the thin film by heat action. One did not need to generate electricity to sustain the full measure of current flowing and thereby eat into some of the useful power generated.

    The problem, however, with the capacitative device was that the transverse current carried through the capacitor stack which was powered directly by the Peltier EMF was no doubt a distributed current across the section of the stack. In this case the capacitor implementation must involve joule heating owing to some current flow in the thin section of the metal film, as allowed for in the above analysis.

    However, then the current traversing the junctions in the transverse direction is not concentrated at the edges of the bimetallic layers, as it could be in a modified non-capacitative implementation. The actual efficiency of the capacitor device found under these circumstances is quite remarkable and is at the limit of what is conceivable from Peltier action owing to the temperature profile across the junction interface. Bear in mind that the temperature governing the Peltier action is not exclusively that at the edges. This suggests that there is some other action involved in the device which contributes to enhancing the efficiency.

    Research into this question points to a thermal feedback effect connected either with the Nernst Effect or with free electron diamagnetism, meaning a thermodynamically powered gyromagnetic reaction set up in conduction electrons in metals in opposition to the magnetizing effect arising from the Thomson effect circulating currents in the metal films.

    The updating of this research report will therefore need to examine the theoretical factors involved and the very different design considerations which apply if one makes connection between the bimetallic films by metal conductive edge contact only, without the circuit path being through charge oscillations in a capacitor dielectric. Such a report update also may need to include an examination of the research implications if one designs the converter to over-excite the thermal feedback action, assuming that such an action is really adding to that efficiency. In principle these later developments point to a very much greater performance potential, having regard to the fact that we are exploiting temperature gradients in metal with transverse current excitation and not a power current flow through metal directly between junctions at different temperatures.

    APPENDIX V

    The Stachan-Aspden Invention: Thermodynamic Power Anomaly
    [October 1989 Report]

    D.C. THERMOELECTRIC POWER ANOMALY

    The Strachan-Aspden invention shows that thermoelectric EMFs far greater than are expected from conventional textbook data are effective with A.C. operation. The reason for this needs to be understood in order to give one a measure of confidence in advancing the R & D effort needed to exploit this newly-discovered phenomenon.

    The following scientific paper, which has not been published elsewhere, deals with this question.

    ABSTRACT

    The discrepancy between the theoretical and measured thermoelectric power of bimetallic thermocouples is explained on the assumption that current flow across the junction occurs in filamentary surges which concentrate the heating and cooling effects and so distort the effective temperature differential. The basic theory used conforms with that of more classical treatments, inasmuch as modern theory has adapted to cope with semiconductor materials which exhibit temperature effects quite different from those found in base metals.

    ******

    The theoretical thermodynamic value of the Peltier coefficient is shown by Ehrenberg [1] to be:

    (kT/e) log(n’/n)

    where k is Boltzmann’s constant, T is the junction temperature, e is the electron charge and n’, n are the population densities of the free electrons in the two metals forming the junction.

    The thermoelectric power of an individual junction is the same expression without the T term, being an EMF per degree of temperature.

    In Table III of this Ehrenberg text [1] a tabulation shows the measured values of the thermoelectric power for various base metals referenced on the metal sodium. The data show that the discrepancy between the calculated and measured values is a factor of 15 for Ag and Au, a factor of 21 for Cu, 51 for Al and 6 for Ni. Ehrenberg also deduces theoretical values for the Thomson coefficient, which makes an additional contribution to the thermoelectric effect and is a function of the rate of change of free electron density with temperature. Ehrenberg does not compare theory and experiment in this case.

    In view of the potential benefits of efficient thermocouple devices in refrigeration avoiding the use of polluting CFC chemicals, there is now a pressing need to understand the fundamental reason for this discrepancy. The following investigation is part of an ongoing commercial research study into this problem, which has already revealed techniques by which to close the gap between the calculated and measured thermoelectric power, particularly for an Al-Ni thermocouple.

    Equation (4) is derived on the thermodynamic assumption of a thermal pressure balance as between electrons in both metals. If, as with certain semiconductor thermocouple junctions, there are positive (p) and negative (n) charge carriers in the different conductors, the Peltier coefficient need not depend upon the ratio of carrier densities. If p-n annihilation occurs at one junction and p-n creation at the other, the current-related thermodynamic energy exchange is more consistent with a thermoelectric power corresponding to a Peltier coefficient of 3kT/e. Upon annihilation, for example, two carriers merge, each transferring its individual thermal energy 3kT/2 into electrical power, and so developing a net EMF E related to an energy Ee equated to 3kT.

    For the Al-Ni combination, using equation (4), Ehrenberg [1] assumed a carrier density ratio of 21, which gives a logarithmic factor of 3.04. This implied a thermoelectric power of 265 microvolts per degree centigrade. Since then, however, carrier polarity data for the Hall effect, as revised, suggests that the Al-Ni thermocouple may have a thermoelectric power related to the p-n condition, which coincidentally gives virtually the same value. Thus, the very substantial discrepancies between observation and theory noted by Ehrenberg still apply, even for this Al-Ni metal combination.

    It is possible that, though a predominant free electron population exists in a metal conductor, the electrical conduction properties are not, at every instant, related to the shared action of all the electrons. Imagine, for example, that the charges carrying current tend to concentrate their ordered motion collectively into a transiently relocating filamentary in-line flow through the conductor. This filament, which may comprise short and transiently discontinuous current elements, corresponding to charge concentrations, breaks up to be replaced by another such filament elsewhere so that, on average over a period of time, the flow appears uniformly distributed across the section of the conductor. In a sense, this physical picture is easy to justify because the electrons following at speed in the same direction along a common line, one behind the other, are less likely to be scattered by collisions.

    Of course, such speculation has little value unless supported by tangible evidence. Force-free vortex filaments which appear on a nanosecond time scale feature in plasma research [2] and have led to analysis of the density and velocity distribution profiles of electrons and positrons in filaments [3]. However, so far as solid conductors are concerned, this filamentary action is not something that can easily be established. It may emerge from research into the properties of ‘warm’ superconductors or from research on the thermoelectric anomalies under discussion.

    Firstly, with the plasma aspect in mind, it is known that the arc discharge in mercury arc rectifiers develops discrete cathode spots on the surface of the mercury pool. This means that the current divides into separate flows. These spots meander around but there is some mechanism by which the discharge breaks into discrete filaments of the order of 15-20 A in strength, as if this represents some critical current factor defining a single current filament.

    Secondly, extensive researches by Hildebrandt [4,5] have shown that current as high as 30-40 A will divide between two separate anode-cathode discharge paths, with anti-phase modulation at a period of 15 ns, and that this effect is not caused by resonant circuit properties but is an inherent property of the conductive medium. Thus, in a plasma at least, this is consistent with a preferred filamentary current state in which the carrier flow is involved in what may be termed an ‘inverse avalanche effect’ as the conduction action concentrates into fewer carriers in a filament with a 15-20 A critical maximum current for continuous in-line flow.

    It is now noted, without particular elaboration, that if a train of electrons form in line at equal spacing and move together to convey current along that line, then, if each one steps forward to the position of the electron ahead at the Compton electron frequency, the current carried is 19.79 A. This is simply ec/λc, where e is electron charge in coulombs, c is the speed of light and λc is the Compton wavelength.

    This is such a basic physical quantity that we must indeed be very attentive to any scientific phenomenon which happens to point to a 20 A current threshold. It suggests a limiting value for the amount of current which can flow in a single filament. It suggests that current may be conveyed even through metal conductors in a burst mode in which it involves short filamentary current elements having a 20 A intensity over lengths reduced as necessary in proportion to the average current flowing through the metal.

    More important, however, is the fact that such a current with electrons really in line at spacings as close as their classical diameters would imply a velocity of electron motion in the current direction of the order of the Fermi velocity of an electron gas. We assume this is possible, notwithstanding the classical Coulomb repulsion effects, embracing to some extent the idea that what is involved is electron displacement from electrically-neutral sites, as if electrons alternate with positive holes or as if electrons and positrons moving in opposite direction somehow carry the current. This proposition then suggests that a Fermi velocity, which is not a function of temperature, in some way powers the action. For a given metal this means that the electron speed along a filament is constant and that filaments of lower current strength than 20 A either comprise electrons or holes at proportionally greater spacing or what are, effectively, short discontinuous filamentary components. Possibly, filamentary vortex loops of circuital current may form, occasionally opening up to carry current forward through the conductor before reforming as closed loops.

    Conceivably, therefore, even in a metal containing a high free electron density, the current flow might, at any instant, be carried by but a few of these electrons and even, given a relatively few mobile carriers, allow the positive ‘holes’ to make a current contribution by favouring a flow route which causes some ordering and displacement of the holes to set up current filaments nucleated by positive charge carriers.

    Now consider such a current filament as traversing a bimetallic junction interface in a thermocouple. The Peltier heating or cooling will be concentrated in an extremely small spot defined by the zone taken up by the filament. Thus the temperature of that spot, which determines the Peltier coefficient cannot be the mean temperature we measure for the junction interface as a whole. Depending upon the relaxation time needed to cause the filament to relocate, the effective temperature active in determining thermoelectric power can be very different from that assumed.

    A concentrated cooling effect at a spot in a junction interface must increase the electrical conductivity in the region of the spot and this alone could develop a crossing point of least resistance, which would tend to keep the current trapped in that position. An exception to this can be expected in certain semiconductors and alloys over temperature ranges for which resistivity may decrease with increase in temperature. Indeed, such materials tend to be those used in advanced thermocouple research, which itself implies that here lies the weakness of normal metals from the viewpoint of their application to thermocouples.

    The Peltier coefficient is measured by supplying a controlled amount of heat to a junction cooled by the Peltier effect, based on a technique developed by Calendar [6]. For Peltier cooling the governing equation is easily formulated as:

    δθ/δx = (αθ)i/4πKx2

    where K is the heat conductivity (assumed the same for both metals). α is the thermoelectric power (volts/oC), θ’ is the absolute temperature and i is the mean current.

    When solved this gives:

    θ ‘ θo & αθ)i/4πKx

    The minus sign would be replaced by a plus sign if the current direction corresponded to Peltier heating.

    We define a mean least value of x as xo and, for ease of rough calculation, estimate this as the distance from the centre to the side of a square area of a cross section of filament. Thus, assuming N electrons per unit length of filament with n as the electron density:

    n ‘ N/(2xo)2

    We further equate the energy of self inductance of the filament with the kinetic energy of the electrons, so that:

    1/2 Li ‘ (N)(1/2 mv2)

    where L is the standard calculable inductance 0.5×10-7 henries per metre, m is electron mass 9.1×10-31 kg and v is electron speed. From (7) and (8):

    i/xo ‘ v (2nm/L)

    The temperature difference between the mean junction temperature θo and the temperature θ’ is then αθ’i/4πKxo and, putting this in (5) gives:

    δθ/δx ‘ (xo/x2)(θo&theta))

    The actual temperature effective at the junction, and the mean junction temperature, change and so scale in proportion. Indeed, from (6):

    θo/θ) ‘ 1 % α(v/4πK) 2nm/L

    This means that this expression represents the factor by which the measured thermoelectric power or Peltier coefficient will underestimate the true value which really governs the thermodynamic action.

    It is believed that v is independent of temperature, as already stated, and that it is also independent of current strength, inasmuch as N is the variable corresponding to effective current. We may use Fermi-Dirac statistics to estimate v, but the result is much the same if we appeal intuitively to the threshold current condition I = ec/λc and estimate v as given by equation (8) when N is 2.66 1014 per metre. This corresponds to a line of electrons spaced by their classical diameter, as calculated using the formula of J. J. Thomson, a saturation condition that is relevant because the diameter was calculated by J. J. Thomson by equating kinetic energy with electromagnetic energy in the magnetic field.

    It is found from this that v is 284 km/s. To estimate the factor (6) insert typical values for copper, eg. n = 1.3 1029/m3 and K = 400 watts-m/oC to find that the factor becomes:

    1 % 0.12α

    if α is expressed in microvolts per degree C.

    Writing now the measured thermoelectric power as ε, we know that the factor just deduced is α/ε, so that if α is 144, as calculated by Ehrenberg for copper at 17o C, the measured value of ε which we ‘think’ is a measured value of α does, from (13), work out at 7.9. Somewhat similar results apply to Ag and Au, which have smaller n value and so a similar theoretical α value, but much the same K value. Note that equation (13) based on n being 60% that of copper and α being 100, say, gives ε as 9.7.

    Thus, even for aluminium, for which the thermoelectric power discrepancy between textbook theory and experiment is a factor of 51, we see that the interpretation provided here reduces the discrepancy to a point where theory and experiment are virtually in full accord.

    It is submitted that the filamentary current proposition discussed is highly relevant to thermoelectric action. As intimated above, commercial research aimed at reducing and, indeed, virtually eliminating the discrepancy in practical thermocouple circuits is proving successful. The secret is to use a.c. to prevent cold spots from forming and so choking off the thermoelectric power, this being a d.c. current symptom peculiar to metal thermocouples as opposed to semiconductors.

    REFERENCES

    [1] W. Ehrenberg, ‘Electric Conduction in Semiconductors and Metals’ (Clarendon Press, Oxford), pp. 21-23 (1956).

    [2] D. R. Wells, IEEE Trans. Plasma Science, 17, 270 (1989).

    [3] V. Nardi, Phys. Rev. Lett., 25, 718 (1970).

    [4] J. Hildebrandt, Physics Letters, 95A, 365 (1983).

    [5] J. Hildebrandt, J. Phys. D: Appl. Phys., 16, 1023 (1983).

    [6] H. L. Callendar, Proc. Phys. Soc. Lond., 23, 1 (1910).

    APPENDIX VI

    Thermoelectric Experimental Device Construction

    The following is a copy of a text written by John Scott Strachan dated February 9, 1994 transmitted to U.S. researchers and project engineers as briefing material for non-confidential discussions held in Edinburgh, Scotland later that month.

    It contains details concerning Strachan’s fabrication of the original test device of which this author had no prior knowledge and it is evident from this information that there is no easy and immediate route to developing this technology using the methods adopted by Strachan. This will therefore explain why Strachan has switched his attentions to other projects, leaving this author to pursue this thermoelectric research along lines closer to his own original perceptions of the invention which avoid use of PVDF substrate film.

    Strachan’s account dated February 9, 1994:

    The original device discovery happened accidentally and was the result of the construction of an ultrasonic lithotriptor. At the time Dr. Aspden and I were discussing the concepts of thermoelectricity and were trying to conceive methods of reducing the thermal wastage in such devices. I had constructed a few experimental samples but with little success. At the same time I was working on an idea for a sonic ‘laser’, a device to progressively amplify a travelling wavefront in a transducer with a view to creating a high intensity ultrasonic pulse from a low acoustic impedance.

    The goal was to produce an intense compression pulse from a low acoustic impedance source for the delivery of a focused shatter pulse in kidney stones. The resultant ‘sonic laser’ units were to be placed in an array which would allow phase steering of the wavefront and the changing intensity in three dimensions to produce a versatile triptic pattern. This would allow the destruction of stones down to 1 mm in size with very little heating of the surrounding tissue. The further advantage of the low impedance of the source would be the ability of the array to ‘listen’ to the shattering of the stones and intelligently follow the crack growth with the peak intensity of the wave. Had the project been successful it would have reduced the treatment time for gall and kidney stones by a factor of ten or more and the lithotriptor itself would have had a market value of more than $100,000.

    The device consisted of several stacks of high k PVF2 in a column, with an electronic circuit set to trigger a compressive pulse in phase with a pulse travelling through the stack, in order to synchronise the circuit and cope with the variations in acoustic impedance of the adhesives. I interleaved the PVDF layers with layers of recording tape. Thus, as the compressive wave passed through the stack, the motion of the recording tape could be detected in the next layer as a fluctuating voltage. As such, it could be used to trigger the next pulse in perfect phase, since the speed of the electromagnetic signal allowed advanced warning to the trigger circuit of the approaching acoustic wave.

    It was a really neat idea and I was very proud of it!

    The device worked well for brief instants but kept blowing the drive circuit. This seemed to occur when the stack was touched on one side. Since I had been thinking about thermoelectric devices and the stack resembled vaguely some of the ideas I had of trying to create a capacitatively-coupled thermopile (later it was proved that such a thing is inherently impossible)*, I wondered if there might be a thermoelectric explanation for the stack’s strange behaviour.

    The construction of the stack was as follows.

    Materials:

    1. 28 µM PVF2, (D33 = 27, k = 18) having bimetallic coating of Ni and Al (Ni = 2200 angstrom, Al = 800 angstrom) and a resistivity less than 0.1 ohm per square.
    2. BASF metal recording tape poled manually in line with the long axis.
    3. ZAP ethyl cyanoacrylate adhesive (formula unknown).
    4. One strip 2.5 mm x 2.5 mm x thickness resonance 2MHz PZT 5a lead zirconate ceramic with silver electrodes.
    5. 10 layers of super-hard acrylic machined to a thickness such that the acoustic delay is equal to a half wavelength at the resonant frequency of the ceramic strip. A suitable material is available from Aerotech Laboratories in California.

    Unfortunately, I do not have a detailed specification on this as the material I used was part of a free sample sent to Dick Ferren of Pennwalt Corporation.

    The sound velocity in PVF2 is 2.2 mm per µs.

    The BASF tape and the PVF2 were then treated with a 2% solution of tetra butyl titanate in petroleum ether to improve bonding. This process must be carried out in an arid atmosphere and then the surfaces should be exposed to a humidity of 100% or greater at a temperature of 40oC. The process is extremely tricky since, if moisture is present before the evaporation of the petroleum ether, the titanium will not bind through the metal layer on to the PVF2 or mylar. This can be diagnosed by the white powdery appearance of the surface. If successful the surface will exhibit a slight iridescence.

    Once the petroleum ether evaporates and the iridescence is present the exposure to humid atmosphere takes place. This will sometimes produce a slight trace of the powdery surface but this may be washed off in petroleum ether or toluene. DO NOT USE ISOPROPYL ALCOHOL!!!

    Cyanoacrylate will not polymerise in the presence of protons, i.e. at any pH below 7 the surface of PVDF will release free protons in the presence of isopropyl alcohol and thus prevent secure bonding. The titanate layer helps to maintain a surface pH above 7 in a moderately dry atmosphere but can not fight the catalysis of the propyl groups in the alcohol.

    Fig. 1 Layered composition of laminate formed

    The greater the care taken at this stage, the more chance of success later. Every single strip should be examined before lamination for any signs of wear on the surface or any trace of white titanate. Failure to do this will virtually guarantee delamination the instant any voltage is applied. This process is time consuming and the several thousand strips will take several weeks to laminate, even working ten to twelve hours a day. But skimping the preparation means that there is no chance of creating any percentage of intact stacks and the entire effort will be entirely wasted. The lamination jig surfaces should be positively charged PTFE. The layers may be added one by one for a period of time equal to one quarter of the anaerobic cure time of the ethyl cyanocrylate. Then a press is applied at a pressure of between 1 and 6 tonnes whilst an ammonia atmosphere is blown past the stack to catalyse curing. Then the process is continued. Time is the main enemy. Since each layer must be examined and the quarter cure time is typically 30 seconds, this is a very intensely stressful process. I managed to complete only one stack on the first day and had scrapped nearly a thousand layers in the process. Practice improved the situation.

    The PVF2 and the BASF tape were laminated together layer by layer to reach a thickness of 0.55 mm, i.e. half λ at 2MHz. This process was repeated until a large number of stacks were produced. Next a 5,000 volt supply was connected across each stack and those that vaporised were discarded. A suitable breathing apparatus should be worn during this process since the fluorine gas emitted as the PVDF breaks down is highly poisonous. It is also corrosive and so the entire process should be carried out at a suitable location and well away from glass, since the hydrofluoric acid will cloud the glass, making you unpopular with your colleagues! The percentage of stacks that break down depends on the defect density of the original PVF2. That percentage depends on whether a gel colloid or suspension process was used during polymerisation. The use of gel tends to leave micro bubbles of gel in the PVF2, reducing the breakdown voltage.

    The surface chemistry of a poled polymer is a constant problem since the creation of compound acetates with various metals can occur with very little encouragement. The passing of a current through the cyanocrylate often starts a cascade catalysis which, once started is unstoppable. This is worst with copper where even a few seconds of current will produce a sufficient ‘seed’ to result in the total acetisation of the metal within a month or so. With nickel the process is less easily turned on since a sulphate must exist before the process starts. The initial test voltage does not usually initiate a corrosion and so the elements may be stored anaerobically and aridly for an indefinite period. Once the elements are subjected to operational voltages or are even accidentally squeezed, which produces enormous voltages in local areas, a gradual decay of metal begins. This will begin in spots surrounding any non-polymerised cyanocrylate. Such spots exist since, even with all the precautions described, certain free H+ ions will be present preventing polymerisation. This is why such care MUST be taken. The metal layers can disappear in just a few hours if the defect density in the bonding layers exceeds 2 per cm2. The reduction in decay time is exponentially proportional to defect density.

    The remaining stacks were now measured for electrical conductivity and those that showed a resistance of greater than 0.001 ohm from side to side were discarded. The apparatus for measurement of the resistance is designed to cancel the apparatus resistance. The electrodes of the apparatus were a pair of steel slip gauges. This is needed in the ultrasonic device to prevent the waveform from distorting. In the thermoelectric application this stage-by-stage testing is even more critical since it defines both the electrical and thermal conductivity of the stack.*

    Those elements discarded for resistance reasons were reground on the edges with a fine diamond wheel in liquid nitrogen to improve flatness and were set aside for an attempt at a slightly thinner stack. (As it happened these discards were lost and only found again at the end of last year [1993] when they were used to construct the third thermoelectric demonstration device.)

    The original batch was divided into several sets of 50 elements.

    Each element was coated with Emmerson and Cumming silver loaded epoxy and bonded to a thin copper or silver strip, top and bottom. Silver is preferable to prevent the production of copper acetate from the cyanoacrylate but I did not have a large quantity of this and by this time was pretty impatient to see if the device would produce the high power ultrasonic pulse I hoped.

    Each element was then laminated to a layer of hard acrylic half λ thick as shown in Fig. 2 below.

    Fig. 2 Composition of bonded element

    These elements were then assembled as shown in Fig. 3, with the ceramic driver at one end.

    Each element was then connected by its electrodes to a drive circuit. The ceramic transducer bonded to the end of the stack was connected to be pulsed by a conventional driver. As the wave passed through the stack an electromagnetic signal from the moving magnets triggered the pulses through the stack in a cascade. By adjusting the threshold of the trigger circuit, the frequency could be tuned to match the oncoming wave. Thus, even though the delay through the stack was inconsistent due to the variation in the bonding thickness, the cascade of pulses could always be kept in phase with the advance of the compression wave. A straightforward sequential delay could not do this, which was why other attempts at ‘sonic lasers’ had failed to produce the expected amplification.

    Everything worked fine except that as soon as the stack was moved, almost as soon as it was touched, the drive circuit would blow. This was surprising since this was no wimpy drive and had the capacity to deliver more than a joule per pulse. But closer examination revealed that the circuit was not blowing in the ‘ON’ cycle but in the ‘OFF’ cycle.

    A sector of the stack was connected across an oscilloscope and the waveform in Fig. 4 was observed when a thermal gradient was across the stack while only noise was visible in the absence of the gradient.*

    At first I naturally assumed that this pulse was a high impedance phenomenon, but I had to wait for a couple of days to investigate since it had blown the oscilloscope.

    Fig. 4. Spike voltage waveform produced by thermal gradient

    A charge amplifier arrangement with a virtual dead short was now attached to the sector of the stack and the waveform had the shape shown in Fig. 5. Note that both of these measurements are of a sector of the stack not connected to the drive circuit.

    Fig. 5. Thermally developed spike voltage with circuit protection

    This was very surprising. Clearly the spikes carried a lot of current and in fact even the impedance of the charge amplifier was too high to discharge the spike before it was driven off. As lower and lower impedances were tried it was eventually possible to discharge the spike in the 200 ns of its duration and get a measure of the number of joules involved.
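    The relevant arithmetic here is an integration of instantaneous power over the 200 ns pulse, E = ∫ V(t)²/R dt, once the spike is discharged into a known low impedance. A minimal sketch; the triangular waveform, peak voltage and load value are wholly hypothetical:

        # Back-of-envelope energy estimate for a short spike discharged into a
        # known low impedance: trapezoidal integration of V(t)^2 / R.
        def pulse_energy_joules(voltages, dt_s, load_ohms):
            powers = [v * v / load_ohms for v in voltages]
            return sum((powers[i] + powers[i + 1]) * 0.5 * dt_s
                       for i in range(len(powers) - 1))

        n, duration = 200, 200e-9                # 200 samples across 200 ns
        dt = duration / (n - 1)
        peak = 100.0                             # hypothetical 100 V peak
        tri = [peak * (1 - abs(2 * i / (n - 1) - 1)) for i in range(n)]
        print(pulse_energy_joules(tri, dt, load_ohms=0.1), "J")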

    The amount of energy in each pulse is difficult to explain since the capacitance of the stack as measured by a bridge was far too low to account for the energy magnitude of the pulse. The pyroelectric and thermoelectric behaviours seem to combine with either a sudden increase in the effective capacitance or perhaps a brief conductive phase through the PVF2. The resulting stack was connected to an input circuit and to an output path via a transformer and then through a rectifier circuit. The rectifier circuit should use very low voltage-drop diodes to minimise losses.

    The rest of the story is well known* but a few points are worth making. The first and third prototype devices produced a reversible effect, i.e. the provision of high energy electrical pulses to the stack resulted in the appearance of a dramatic temperature differential across the stack. The second device, built without the magnetic interface strips, did not do this and was also incapable of self-driving through an SCR. The electrical efficiency was measured accurately in terms of the transfer of heat and the electrical output of the device, but the amount of breakthrough from the external drive circuit was ignored. Were the measurements valid?** As I recall, several results were surprising but were explained away by some fancy footwork from Dr. Aspden. The third device did indeed produce a reasonably impressive thermoelectric efficiency as a generator, but detailed analysis of the measurements of the device as a heat pump shows that its performance is nowhere near as efficient as would be expected. While this is explainable to some extent from the predicted behaviour of the protection circuitry, the fact remains that as a heat pump the device performs no better, and perhaps worse, than several commercially available heat pumps. What if the discrepancy between the thermoelectric generator effect and the heat pump effect is the result of a transient electrochemical effect? The chemical interaction of cyanoacrylate and metal is already known to be charge sensitive and is very temperature sensitive. This is a major nightmare for me. What if, in fact, all we have is an endothermic electrochemical reaction? Several gels exist that freeze when subjected to an electrical current. And a lot of those are acetates! The electrical generation effect is even more common.

    The current device is now inert but it is likely that not all elements will have decayed. I am now dismantling the device and will attempt to recover as many elements as possible. I would propose that the best use that could be made of these is to distribute them to the various laboratories that plan to attempt the construction of a device.

    [End of Strachan’s February 2, 1994 Communication]

    *******************************************

    Concluding Comment

    It has become clear, especially in the light of the above-stated position taken by Strachan, that ongoing experimental research on the phenomenon underlying the Strachan-Aspden invention will need to be undertaken by Strachan’s coinventor, myself, as author of this Report, in following my own different convictions concerning base metal properties when activated thermoelectrically using a.c. However, I can but hope that those having the appropriate academic or corporate affiliations who come to read this Report will see the merit in the Nernst Effect interpretation of the transverse a.c. action, as described in the initial commentary of this Energy Science Report No. 2, and will undertake their own investigations in pursuit of this new technology. The outcome of my own efforts will be reported in Energy Science Report No. 3.


    ENERGY SCIENCE REPORT NO. 8

    POWER FROM SPACE: THE CORREA INVENTION

    © HAROLD ASPDEN, 1996

    Sabberton Publications
    P.O. Box 35, Southampton SO16 7RB, England

    ISBN 0 85056 016 0

    Contents

    Preliminary Remarks
    The Correa Project
    The Root of the Problem
    The Dilemma Confronted
    Operational Characteristics of Correa Discharge Device
    Performance Data of the Correa Discharge Device
    The Vapour Reaction Hypothesis
    The Author’s 1977 Plasma Discharge Device
    Spence’s 1986 Energy Conversion System
    Chernetskii Vacuum Energy Breakthrough
    A Concluding Note

    APPENDICES:
    I: Why the Earth is not a Self-Excited Dynamo
    II: The Thunderball – An Electrostatic Phenomenon
    Listing of Published Work of Dr. Harold Aspden

    *****

    POWER FROM SPACE: THE CORREA INVENTION

    Introduction

    This Energy Science Report is one of a series concerned with new energy technology
    and the fundamental energy science that is involved. It is devoted exclusively to the
    research findings of Dr. Paulo Correa and Mrs. Alexandra Correa of Concord, Ontario,
    Canada and seeks to explain the fundamental physics underlying their remarkable
    experimental discovery.

    The Correa technology pioneers one of the four routes now opening up and promising
    to give us access to a plentiful source of what is coming to be termed `free
    energy’. These all can contribute in their various ways to an energy future free from
    pollution, but all, at this time, trespass on forbidden territory, as judged by orthodox
    physicists and so are not attracting mainstream scientific interest. This leaves the field open
    for exploration and exploitation by the few who do have the needed technical competence,
    the inspiration and an independence of spirit.

    The four avenues can be classified as (1) cold fusion, (2) ferromagnetism, (3) vacuum
    spin and (4) electrodynamics. Each involves a mysterious input source of excess energy and
    each is destined to impact the world of technology in the near future.

    It is debatable at this time whether events will confirm true nuclear ‘cold fusion’ as the
    source of heat in the well publicized pioneer work of Fleischmann and Pons. It may in fact
    be another manifestation of the ‘vacuum spin’ phenomenon, by creating in an aqueous
    electrolyte, or even in the cathode itself, conditions somewhat analogous to those prevailing
    in the Correa apparatus. Indeed, there seems to be no doubt that the Correa technology
    itself bridges two of the above ‘excess energy’ categories, electrodynamics and vacuum
    spin. The Correa method is probably the most advanced of these emerging new energy
    technologies, being fully reproducible and well researched, with its test findings well
    documented in, and protected by, granted U.S. patents. It already allows us to tap energy
    from space itself, or rather the vacuum field activity that fills space, and in contrast with the
    alternative methods it offers what may prove to be a mobile light-weight power source
    compared with the heavy apparatus needed where magnets and rotating machinery are
    involved. Unlike ‘cold fusion’ which generates low grade heat output, the Correa
    technology generates electricity at power voltage levels.

    The physics involved in understanding the source of energy in the Correa discharge
    tube is as basic as that required to understand the energy source which sets up the force of
    gravity. Both are seated in an electrodynamic action involving, in the main, heavy ions and,
    indeed, the electrodynamics of the interactions between heavy ions are not well understood
    by scientists. This is why they have failed to solve the mysteries of gravitation and the
    problem of field unification and why they have missed seeing the way forward to the new
    energy technology which we are about to discuss in this Report.

    Below we will come to describe the operating principles of the Correa invention and
    the action by which energy is extracted from the aether. The reader who is impatient and
    curious to learn some details about the technology may wish to jump ahead to read the
    section between pp. 6 and 8 and then from p. 17 before coming back to read what
    immediately follows. The sceptical scientist who does not expect to believe what is
    evidently being claimed will be served best by following the discourse as it now develops.

    Preliminary Remarks

    Having just stated that “the electrodynamics of the interactions between heavy ions are
    not well understood by scientists” I see it as important to justify this statement before I
    venture to criticize other aspects of basic physical theory relevant to the new energy field.
    I will simply quote a few passages from the published specification of British patent
    application GB 2,002,953 which I, as inventor, applied for in 1978. The title of the patent
    application was ‘Ion accelerators and energy transfer processes’. Textbook doctrine on the
    subject has not progressed since that time.

    “Electrical engineering has developed using the simplest formula (for electrodynamic
    interaction between electric charges in motion) and few today would concede that
    there is any question about the universal validity of this formula, the so-called Lorentz
    formula. More informed teachers of electrical engineering have kept the problem in
    mind and express caution. Professor E. B. Moullin, who was President of the
    Institution of Electrical Engineers and Professor of Electrical Engineering at
    Cambridge University at a time when the applicant was engaged on Ph.D. research in
    electromagnetism (1950-1953), wrote in the 1955 edition of his ‘Principles of
    Electromagnetism’:

    ‘It is useless to speculate about the effects of electricity moving in a particular piece
    of circuit until we have discovered further laws of electromagnetism’

    This appears at page 26 of this Oxford University Press publication.”

    “In a book by A. Von Engel entitled ‘Ionized Gases’, 1965 Edition also by Oxford
    University Press, there is the statement at page 285:

    ‘There is no final answer to the question of whether the primary electrons find in the
    plasma an artifice which without extracting too much energy is able to transform the
    more or less uniform electron energy into an energy distribution which is needed to
    satisfy ion production in the gas. In fact it has been suggested, as a result of certain
    probe measurements, that there is a strong positive space charge accumulated in
    front of the cathode, so intense that the space potential is considerably higher than
    the discharge voltage and at least higher than the lowest excitation potential. How
    this space charge develops and how electrons have random energies sufficient to
    overcome the retardation in the negative field between the space charge and the
    anode is still an open question.’

    Earlier on page 273 he wrote:

    ‘One of the most puzzling problems of the arc discharge is the functioning of the
    cathode of the cold arc. Cathodes of Cu, Ag, liquid Hg, and many other metals are
    examples of this type. It can be stated from the very outset that no final solution of
    this problem has yet been found.’

    Berneryd et al (Direct Current, vol. 6, 1961, pp. 81-85) studied instabilities of
    discharges and found positive ions to have energies very much higher than suggested
    by theory. Benford et al (New Scientist, vol. 56, 1972, pp. 514-516), writing about
    electron beams in relation to fusion, declared that a 1.3 MeV electron system
    accelerated gas ions to energies as high as 20 MeV. They said that the origin of the
    fields was a subject of speculation. Stock (Journal of Physics D, vol. 6, 1973, p. 988)
    found that ionization currents calculated from electron energies were up to one
    thousand times smaller than those observed.”

    Now, in quoting the above I have added the underlining to the marked passages to
    emphasize my point that the scientific `experts’ in the field do not understand the reason for
    these energy anomalies produced by electrical discharges through ionized gas. The scenario
    giving these problems is one where the discharge involves a predominant presence of heavy
    ions rather than the mere arc discharge of electrons freed as by thermionic emission. It
    applies to current in what are termed ‘cold cathode’ discharge tubes.

    The passages quoted above should be kept in mind when reading about the
    technological breakthrough disclosed by the Correa inventions. The phenomenon involved
    has been turned to account by generating electrical power output far in excess of the input
    power used and so it is no longer a question of scientists declaring, as they do regularly,
    that they understand so much about the laws of thermodynamics that they can deny this
    possibility without even considering the evidence. They do not understand their own
    experimental findings of clear record in this particular technical field and so are in no
    position to say that excess power generation is impossible by virtue of a ‘law’ prescribed
    by past ‘authority’ in ignorance of the experimental facts just quoted.

    We have here to confront the reality of this situation, namely that energy from a
    mystery source can be harnessed technologically, and this Report aims to explain this as
    well as pointing to the source of that energy and showing where accepted physics stands
    in need of correction.

    To appreciate in full measure what this Report is about, it is recommended that the
    reader should procure copies of the three U.S. Patents granted to Dr. Paulo N. Correa and
    Mrs. Alexandra N. Correa: U.S. Patent No. 5,416,391 (issued May 16, 1995), U.S. Patent
    No. 5,449,989 (issued September 12, 1995) and U.S. Patent No. 5,502,354 (issued March
    26, 1996). The disclosure in the specifications contains experimental facts presented in a
    form which amounts to an academic dissertation or degree thesis and, as I see it, the
    disclosure in these patents cannot now be ignored owing to their clear showing that we
    already have access to the hidden energy source which one can presume powered the
    creation of the universe.

    I can also interject here a note added since the main body of this text was written to
    advise that a full description of the Correa project together with a copy of the specification
    of U.S. Patent 5,416,391 has been published in the Vol. 2, No. 7, 1996 issue of Infinite
    Energy (ISSN 1081-6372), Editor-in-Chief and Publisher Eugene F. Mallove Sc.D. and that
    publication warrants the fullest attention.

    Still as part of these preliminary remarks I further draw attention to the fact that my
    paper: ‘The Law of Electrodynamics’, appeared 27 years ago in the Journal of the Franklin
    Institute, 287, 171-183 (1969). It explained how one could justify, by simple dynamic
    analysis based on empirical data, the fact that in an electrical discharge through heavy ions
    there is an axial electrodynamic force acting on the ions that scales as (M/m)i², where M/m is the
    ratio of ion mass to electron mass and i is the current carried by the heavy ions.
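    To give a feel for the scale of that factor, the short computation below simply evaluates M/m for a few ion species; the mass ratios are standard constants and nothing further about the force law is calculated here:

        # Ion-to-electron mass ratios M/m, the scaling factor attached above
        # to the axial electrodynamic force (M/m)i² on the heavy ions.
        M_ELECTRON_KG = 9.109e-31
        AMU_KG = 1.6605e-27

        for species, mass_amu in [("H+ (proton)", 1.007),
                                  ("He+", 4.003),
                                  ("Ar+", 39.948)]:
            ratio = mass_amu * AMU_KG / M_ELECTRON_KG
            print(f"{species:12s} M/m ~ {ratio:8.0f}")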

    I stated that many authors had found anomalous cathode reaction forces in discharge
    studies and quoted E. Kobel, Physical Review, 36, p.1636 (1930) as measuring that
    anomalous cathode reaction force and showing that it was proportional to the square of
    current and far greater than any value one could compute from a pinch pressure in the
    discharge filament.

    Later, in 1977, my paper: ‘Electrodynamic Anomalies in Arc Discharge Phenomena’,
    appeared in IEEE Transactions on Plasma Science, PS-5, 159-163 (1977). Here I had in
    mind the subject of my patent application as referenced above. See the quoted text on its
    p. 161 and the last five lines on p. 163, where the action was deemed to accelerate ions into
    the cathode as a means for generating heat. I had by then become aware of the possibility
    that we could tap vacuum field energy and generate heat anomalously by harnessing the
    electrodynamic forces set up in an axial discharge involving heavy ions. However, my
    circumstances did not allow me to take the proposition forward experimentally. The
    invention, the subject of that patent, was aimed at tapping the zero-point field energy to
    produce ‘excess energy’ heat by the electrodynamic ion discharge action which sustains a
    positive space charge adjacent the cathode. In contrast, as we shall see below, the Correa
    invention is able to produce electrical power directly by discharging that positive charge in
    pulses drawn through a secondary output circuit. The energy source in both cases is the
    same, as is the principle for setting up the positively ionized plasma and holding it
    transiently stable.

    By 1985 a new kind of discharge anomaly had been reported as a result of passing very
    high current through water. I showed a simple derivation of my version of the law of
    electrodynamics and commented on this anomaly in my paper: ‘A New Perspective on the
    Law of Electrodynamics’, Physics Letters, 111A, 22-24 (1985). This referred to the
    incomprehensible enormous explosive effects found from pulsed ion discharges in pure
    water and pointed again to the reason advocated earlier, namely that scaling factor of M/m.

    Separately in my paper: ‘Anomalous Electrodynamic Explosions in Liquids’, IEEE
    Transactions on Plasma Science, PS-14, 282-285 (1986), I presented a more detailed
    analysis of the incredibly high speed at which ions are driven into an electrode, in defiance
    of known physics. In the Correa invention to be described there is a slowing down of these
    fast ions by causing them to transfer energy into the build-up of electric charge in the
    abnormal glow discharge in front of the cathode, which energy can be drawn off as output
    electrical power, rather than as heat.

    To complete this preliminary account I refer also to my paper: ‘The Thunderball – An
    Electrostatic Phenomenon’, presented at the ‘Electrostatics 1983’ conference held at Oxford
    University, and documented in Inst. Phys. Conf. Series No. 66, at pp. 179-184.

    As can be seen from the data presented in the third Correa patent referenced above, the
    operation of the Correa discharge tubes at low pulse frequency indicates that energy in
    excess of 1,000 joules can be stored in the plasma of each discharge pulse. This implies an
    enormous capacitance and voltage gradients that should be far in excess of those actually
    prevailing. Indeed, for such energy to be contained as electric charge energy in a plasma
    confined within the Correa tube one would expect voltage gradients expressed in billions
    of V/m, unless some compensating reaction suppresses that field.

    This energy of 1,000 J in a volume of plasma of the cubic cm. order is an energy
    density of some 10⁹ J/m³, which is of the same order as that known to exist in thunderballs
    produced by lightning discharges. The Correa invention therefore, in a sense, mimics the
    action of lightning discharges in compacting energy into plasma balls which we see as the
    thunderball anomalies of atmospheric electricity.
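    The arithmetic behind that figure is worth a quick check; a one-line computation (Python) of the quoted energy density:

        # 1,000 J stored in a plasma volume of the order of one cubic centimetre.
        energy_j = 1.0e3
        volume_m3 = 1.0e-6               # 1 cm^3 expressed in m^3
        print(energy_j / volume_m3)      # -> 1.0e+09 J/m^3, as quoted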

    The subject paper, which will be reproduced later in this Report as Appendix II,
    explained how radial electric displacement, as opposed to the transverse displacement we
    know from Clerk Maxwell’s theory, can induce `vacuum spin’ or `aether rotation’ which
    permits such energy densities to be stored in an electrically quasi-stable manner at low
    voltage gradients.

    The Correa technology, it will be suggested, does therefore rely on `vacuum spin’ for
    its storage function, whilst setting up the positive plasma in the discharge tube by electrodynamic confinement in an axial sense, as opposed to the electromagnetic `pinch' sense that
    features in fusion reactor research. However, though it succeeds in sustaining confinement
    for the pulse period, the Correa device is not powered by a fusion process. Indeed, since
    this author presented the subject paper at the conference at Oxford University he has
    become aware of independent research in three countries on electromagnetic machines
    which overheat owing to the low voltages and very high current involved, but which
    nevertheless draw energy anomalously from the `aether’ by setting up radial electric fields
    in a conductive disc spinning in a magnetic field. The Correa technology taps this same
    `vacuum spin’ source of energy and the subject paper published by the Institute of Physics
    in U.K. points to the aether phenomenon involved.

    It is of interest also to mention that geophysicists and cosmologists have not been able
    to explain the magnetism of the Earth or the Sun in terms of unipolar charge rotating with
    that body, even though a connection was recognized which gave basis to the Schuster-Wilson hypothesis. This was not just because they discovered that the magnetic field
    reverses periodically but because the charge needed would develop those same electric field
    gradients of billions of V/m that are somehow avoided in the Correa tube. This really is an
    interesting subject of research, all connected with the evident fact that charge displacement
    in the aether cancels that electric field but does not cancel the magnetic field! I therefore
    see the Correa research as having important implications for the interpretation of several
    phenomena in cosmology. See also Appendix I, where I explain why the alleged self-excited dynamo theory for the geomagnetic field is quite untenable.

    I refer in this connection also to ‘Space, Energy and Creation’, my privately published
    paper, for use on the occasion of a lecture delivered at the University of Cardiff in 1977.
    Copies are available from Sabberton Publications, P.O. Box 35, Southampton SO16 7RB,
    England, the publishing source of this Report. This was a lecture delivered by the author
    as an invited speaker addressing students in the Physics Department at that university. It
    dealt with the subject of anomalous electrodynamic acceleration of ions in plasma
    discharges and explained why this was relevant to the induction of ‘vacuum spin’ which was
    intimately linked with the energy and momentum aspects of creation of stars and planets,
    as well as thunderball and tornado phenomena. The basic physics of ‘vacuum spin’ are
    there presented in a concise way for easy assimilation by students. The lecture paper also
    explains how ‘vacuum spin’ can stabilize the axial discharge and pointed to some surprising
    experimental work by Vonnegut on that subject. Later, in a note at the end of this Report,
    text from the last page of that lecture paper is reproduced for the reader’s interest.

    It is also noted that an important updated section of the theoretical analysis in that
    paper has recently been incorporated in my new book ‘Aether Science Papers’, now
    available from the publishers of this Report.

    It will be evident from this that the now-emerging technology for generating power
    from space energy is destined eventually to upset the physics world and particularly
    cosmologists. Instead of exercising their criticism to block the breakthrough developments
    on the new energy front, they need instead to look to their own problems, as now exposed,
    because they have invested so much time in futile theoretical pursuits that now come under
    attack.

    The Correa Project

    Essentially the core element of the Correa apparatus is an electrical discharge tube
    containing a rarefied gas. It is a tube having a special construction but which can be
    manufactured in much the same way as a fluorescent lamp. Its objective, when used in a
    special circuit, is not the emission of light but rather the generation of electrical power in
    excess of the input power needed for its operation.

    This seemingly impossible feat is proved by providing a battery of electric d.c. storage
    cells large enough to deliver a high enough voltage to trigger the discharge which in turn
    feeds output to a separate battery of d.c. storage cells which store the electrical energy
    generated. Since the generation of electricity is the objective there can be no better way of
    proving that, over a period of time, the net energy output exceeds by far the net energy
    input. Measurements of instantaneous power and the energy transients can reassure an
    investigator that there is a power gain but sustained performance conditions are essential
    for a definitive proof. Indeed, this will be better understood when the principle of operation
    is explained. The pulse of energy input is ahead of the output pulse in time-phasing, owing
    to the intervening opening of the gate, otherwise described as the radial electric field, which
    allows entry of energy from the quantum activity of the vacuum field. The battery tests,
    repeated during a succession of charge and discharge cycles, using two banks of cells, one
    charging on the output power as the other discharges to supply the input power, provide indisputable
    evidence of a substantial gain in power. This gives a verifiable accounting of an energy
    inflow that can be put to good use while enough energy is returned to sustain operation of
    the system. Though a cumbersome part of the overall apparatus in comparison with the
    small and light-weight tube, which is the heart of the system, such a battery of conventional
    electric storage cells satisfies a research need, but ultimately, since power feedback should
    make the device self sustaining, one can foresee a compact product not requiring these cells
    and which operates to deliver electric power, as if from nowhere.
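    The bookkeeping implied by such a test is simple to express. A minimal sketch (Python) of the accounting only; the cycle figures below are invented placeholders, not Correa data:

        # Two-bank protocol: one bank discharges to drive the tube while the
        # other charges on the output; after each cycle the roles swap.
        def net_gain(cycles):
            e_in = e_out = 0.0
            for v_in, i_in, v_out, i_out, seconds in cycles:
                e_in += v_in * i_in * seconds     # energy drawn from driving bank
                e_out += v_out * i_out * seconds  # energy stored in charging bank
            return e_out - e_in, e_out / e_in

        cycles = [(560.0, 0.1, 12.0, 6.0, 3600.0)] * 10   # placeholder data
        gain_j, ratio = net_gain(cycles)
        print(f"net {gain_j:.0f} J over 10 cycles, output/input ratio {ratio:.2f}")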

    Now, our world of technology is not really ready to accept such a claim and no amount
    of technical comment here concerning the specific structure of the Correa apparatus can
    sway the minds of a professionally qualified engineering and scientific community, well
    indoctrinated by their teaching and by their experience to require conformity with the well
    established laws of thermodynamics.

    It goes without saying that one simply cannot get energy from nowhere and so there
    are only two issues to confront. Firstly, does the Correa apparatus really deliver what is
    claimed? That is a question of fact which needs the testimony of those witnessing
    demonstrations and able to judge what they see. Secondly, given that the Correa invention
    does deliver excess power, how are we to come to terms with the need to understand the
    true source of that excess energy? To be sure, the answer is not to be found in physics
    textbooks and such textbooks are not noted for disclosing unsolved mysteries. Yes, they
    do tell us that there is still a mystery concerning the force of gravity, which we all know
    should somehow find unification with the theory of electromagnetism. However, gravity
    is something everyone of us contends with every waking hour of our lives. It weighs upon
    us physically, if not mentally, but there are forces and actions seated in the energy
    background that are revealed only in a spurious way or come fleetingly from unusual
    experimental conditions. These are not recorded in our physics textbooks, because those
    who write such books write only about topics they understand and can explain by accepted
    theory.

    This, therefore, is why this Report is being written. We need to understand that source
    sufficiently to be able to do onward design work and develop the Correa invention. We
    need to understand it in order to reassure those who manage and invest in new energy
    technology, because there has to be scientific certainty underpinning any R & D venture that
    is not funded as a mere academic speculation. The latter is the province of the funding
    resource assigned to university and to government research institutions and those
    responsible for such funding are very careful indeed in ensuring they avoid controversy by
    not investing in projects which their peers may ridicule.

    The Correa project is now the trigger for taking forward the theme of some earlier
    research findings, notably those of Geoffrey Spence, a researcher in U.K. who has
    demonstrated an operable `over-unity-performing’ discharge device to sponsoring interests,
    but whose device was presumably impractical in requiring heavy magnets to guide the
    discharge in a kind of cyclotron spiral orbit. There is also the research of Professor
    Chernetskii in Russia and possibly even the work of Tesla to keep in mind, but it is the
    research of Dr. Paulo Correa and Alexandra Correa that has been disclosed in sufficient
    detail to warrant attention at this time in view of the immediate prospect it offers for rapid
    technological development. Later in this Report such background activity will be
    reviewed because the several earlier findings lend support to the Correa project, but the
    immediately following sections of this Report will be devoted to presenting a scientific case
    concerning the true source of the excess energy generated by these plasma discharge
    devices.

    To conclude this introduction to the Correa project, it is noted, by way of a summary,
    that the apparatus involves a cold-cathode electric discharge with current flow between
    anode and cathode producing an axially-directed electrodynamic compression force which
    squeezes positive ions into a ball of plasma trapped against the cathode. The electron
    current from the cathode delivers the negative electrons at a rate which is overwhelmed by
    the ion discharge pulse and the powerful ball of positively charged plasma can build up
    enormous radial electric field gradients which induce equally enormous cancelling electric
    field gradients owing to a spin reaction set up in the vacuum medium.

    The vacuum reacts by propagating waves when responding to transverse electric fields
    around a radio antenna. However, whereas the latter promote such wave propagation
    according to Maxwell’s theory, the vacuum spin provides a contained quasi-stable field
    condition which draws energy from the phase-lock of the quantum spin states of the
    enveloping aether field. The analogy we see in nature is the creation of the thunderball
    which research findings show to have electrical energy densities of the order of 10⁹ J/m³
    stored in their plasma forms. Some of the pulses in the Correa experiments operated at low
    pulse frequencies are found to contain energy of one thousand joules or more. With a 2 cm
    electrode spacing defining a plasma as having a volume of cubic cm. order, this gives 10⁹
    J/m³ as an energy density, clearly of the same order as is reported from thunderball
    investigations. It has, incidentally, been reported that a thunderball was once seen to enter
    a barrel of water and dissipate itself leaving the water at an elevated temperature. From the
    data collected its energy density was estimated.

    However, we can now see from the Correa research findings that the trapped energy
    can be deployed into electrical power output and so measured as it is shed by an output
    pulse and then more energy can be regenerated repeatedly at the pulse frequency. The
    Correa data indicate an inverse relationship between the energy output per pulse and the
    pulse frequency, given a sustained input voltage and input current. Therefore, much of the
    Correa research has involved examining different electrode configurations, gas fillings and
    pressures, as well as different electrode materials and operating conditions, all with the
    object of determining which give the best power gain. Such data is presented in the Correa
    patents and the technical description which will be given later in this Report is directed not
    to the specific technology options, but rather to the disclosure of what is relevant to
    understanding what governs the access to the vacuum energy source.

    The Root of the Problem

    It is basic to the teaching of Newtonian mechanics that momentum is conserved when
    energy transfers between particles in motion. Yet Newton’s laws were formulated before
    the electrodynamic action between charged bodies had been discovered and before it was
    known that all matter is composed of fundamental particles which are electrically charged.
    Scientists today declare that a substantial portion of the matter forming material bodies on
    Earth is really attributable to `neutrons’, which supposedly have no electrical charge.
    However, the neutron exhibits a magnetic moment that betrays the presence of electrical
    charge in its composition and all we really know about the properties of a neutron apply to
    something that only exists as an unstable particle having a mean lifetime of the order of 15
    minutes. It is mere hypothesis to suggest that neutrons exist alongside protons in atomic
    nuclei and so exist as a major component of matter. In fact, beta particles (electrons and
    positrons) have a stronger claim to a presence in atomic nuclei and these can serve with
    protons to account for all the properties of the atomic nucleus.

    Essentially, the point made here is that Newton devised his laws without taking proper
    account of the electrodynamics of interacting charges and the fact that all matter, even
    matter we see as electrically neutral, comprises nothing other than such charged particles.

    In the electric discharges of the Correa apparatus we have a scenario where heavy
    atomic ions, rather than mere electrons, are also the charge carriers. A rarefied gas, such
    as argon, in the discharge tube is ionised and the heavy positive ions are pulled one way by
    an electric field, whilst the electrons go the other way. The current flow is that of electrons
    in one part of a closed circuit but at least partially that of heavy ions in another part of the
    same closed circuit. To understand the physics involved, we need to know whether
    Newtonian principles hold valid in such a case and whether even standard electrodynamic
    principles hold valid having regard to the fact that their empirical basis is not the testing of
    current circuits where heavy ions flow in one circuit segment and electrons flow in another
    circuit segment.

    There are undisputed and unexplained anomalies of record in the science literature
    concerning the very substantial cathode reaction forces set up in what has come to be
    termed a cold cathode discharge. These have been mentioned already but the Correa patent
    specifications reference several other sources and the data provided in the Correa patents
    include measurements of such forces in the apparatus tested by Dr. Correa.

    In the cold cathode discharge thermionic emission of electrons from the cathode is
    avoided and an electric potential set up between anode and cathode is relied upon to trigger
    the discharge. Ostensibly, it seems that there is a force acting on the cathode with no
    counterpart force acting on the anode.

    The root of our problem then has two offshoots, one being the Newtonian origin of the
    principle of conservation of momentum and the other being a feature of accepted
    electrodynamic law that says that interaction forces act on a charge at right angles to its
    motion.

    There is contradiction of principle here and virtually all physics textbooks contrive to
    avoid discussion on this enigmatic problem. If an electrodynamic force acts on charge at
    right angles to its motion it cannot do any work. This means that there can be no exchange
    of energy with the field background owing to that interaction, other than the energy
    deployments that arise from electrostatic potential. It means that physics theory obscures
    the process of electromagnetic induction by relying on an incompatible mixture of empirical
    formulations which serve us well in engineering design, provided we do not trespass into
    territory outside the scope of the empirical protocol relevant to our problem. The Correa
    invention lies in that outside territory because the current circuit through the discharge tube
    is not one involving a closed all-electron flow such as was used in one or other of the
    interacting circuits that gave basis for the accepted empirical data.

    It is well accepted that if there can be any breach of the principle of conservation of
    momentum then there is scope for gaining, or losing, energy anomalously, in seeming
    contradiction with the principle of conservation of energy. However, one needs to be
    careful to be sure that one is looking at a total system. If the field background contains
    energy, even the energy stored by magnetic induction, it must participate in the energy
    conservation process and that field background is not something we can isolate as belonging
    exclusively to a particular charged particle or a particular current circuit. There is enough
    energy activity in the vacuum (the aether) owing to its intrinsic charge motion that underlies
    the quantum control of atomic electrons to assure the buffer needed to keep faith with the
    law of energy conservation, whatever anomalous forces are developed in any apparatus we
    can build.

    In the university teaching of dynamics as evidenced by a textbook by an author in
    Cambridge, the seat of learning attended by Isaac Newton, and published by Cambridge
    University Press, the principle of conservation of momentum is deduced by the preliminary
    assumption that internal actions and reactions between particles are equal and opposite in
    pairs. It is as if each and every paired combination of particles interacts with one another
    without any dependence upon anything else. This is manifestly not the case for the
    electrodynamic interaction because electrodynamics has a dependence upon motion relative
    to a frame of electromagnetic reference, something totally absent from Newtonian
    mechanics.

    When Einstein tried to bring conformity between inertial and electromagnetic effects
    his transformations of the space and time dimensions led him to the Lorentz force law,
    which prescribes that the interaction force between two electric charges in motion is not
    directed between the two charges as internal actions and reactions between particles that
    are equal and opposite in pairs. This condition is only met where the two charges travel at
    the same speed side-by-side along parallel paths and this clearly is not the case for the
    discharge current through the tubes used in the Correa apparatus. An electrical discharge
    likes to form a kind of filamentary current with charge travelling in line, each ion or electron
    following behind its like kind, with the negatively charged electrons dodging around the
    heavy ions or even replacing electrons in the atomic ion and neutralizing its state.

    It follows, therefore, that, whether one relies on the principles of Newton or Einstein,
    or both, these being the accepted doctrines, the resulting theory will have no certain bearing
    on the practical situation encountered by the Correa research.

    This means that, with the vast majority of scientists all conforming with the restrictive
    disciplines of physics that confine knowledge to conventional technology, those few who
    venture into the new energy world with an open mind confront some very significant
    opportunities.

    So, first and foremost, we must be prepared, when considering certain very special
    situations in electrodynamics, to go against the teachings of our profession and pay
    attention to the messages in the experimental findings disclosed by Dr. Paulo Correa and
    Mrs. Alexandra Correa.

    The earlier messages about anomalous electrodynamics which this author found in the
    many scientific papers of record were sufficient reason for investigating where the errors
    had crept into our theories. The author discovered how energy is stored by electromagnetic
    inductance within a metal body and how it is later retrieved from within that conductive
    material. This provided the onward inspiration for questioning how the electrodynamic
    interaction between two electric charges in motion is affected if they have the same charge
    magnitude but different mass. There was, in the metal, a magnetic field reaction which was
    not properly factored into the diamagnetic state as analyzed in conventional theory.

    It was in fact ignored, because energy was not the focus of attention in the use of the
    Lorentz field formulation, but if its energy role had been duly noted and incorporated in the
    theory of the steady field situation, it would have given explanation of the factor-of-2
    anomaly that became known as the g-factor. This is a phenomenon of charge in orbital
    motion, but theoretical physicists sought to solve the problem by inventing what they called
    ‘spin’, even though a charge which `spins’ is not moving its centre of charge and so its field
    is not changed by a changing spin condition. There are angular momentum issues involved
    here in relation to magnetic moments and the g-factor was measured in solid metal rods by
    the ratio of these two quantities. The essential step needed to explain that factor-of-2 in
    terms of orbital reaction of electron motion required taking the argument from within the
    metal to the external vacuum field. There has to be in the aether the same basic g-factor
    reaction as applies within a metal conductor and this clearly points to the g-factor reaction
    being at the heart of the field energy storage by magnetic induction. The aether and its
    angular momentum properties, as well as the thermodynamic properties associated with the
    activity of its charge composition cannot just be brushed out of sight by a flourish of the
    mathematician’s pen.

    I interject here the comment that I am not unaware of the anomalous g-factor account
    afforded by Q.E.D., the theory of quantum electrodynamics. This is regarded as being the
    only theory of relevance on the subject of electron dynamics, because it can explain the g/2
    factor of the electron as being 1.001159652. It involves copious mathematical exercises
    that are far too extensive to be fully worked through and documented to that precision in
    any textbook. Indeed, as the successive advances in precision measurement crept to this
    quoted value, the theoretical physicist was always found to be lagging behind in trying to
    work through to the next iteration in the calculation. If, on the other hand, the reader
    would like to see a derivation of the factor 1.001159652200 fully presented in only three
    printed pages, the reference is the Lett. Nuovo Cimento, 32, 114-116 (1981), this being a
    well known English language periodical published by the Italian Physical Society which
    was noted for its rapid publication of new scientific contributions.
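    For orientation only, it may be noted that the leading (Schwinger) term of the conventional QED series, α/2π, already lands close to the quoted figure. The check below is standard arithmetic and is in no way a reproduction of the three-page derivation cited above:

        import math

        # Fine-structure constant; the value used here is the standard one.
        ALPHA = 1 / 137.035999
        # Leading QED term alone: 1 + alpha/(2*pi) ~ 1.00116141,
        # against the quoted measured figure 1.001159652.
        print(1 + ALPHA / (2 * math.pi))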

    A later very relevant reference on the same theme, but more closely connected with the
    energy source we are concerned with in the Correa invention, is my paper entitled
    ‘Fundamental constants derived from two-dimensional harmonic oscillations in an
    electrically structured vacuum’, which appeared in Speculations in Science and Technology,
    9, 315-323 (1986). This paper, as the title implies, referred to synchronizing constraints as
    between aether charge in its quantum activity as part of the vacuum medium. The analysis,
    which is quite brief, is also reproduced in my new book ‘Aether Science Papers’.

    As energy is ‘lost’, as by thermal radiation into outer space, it is absorbed into the
    quantum activity of a two-dimensional oscillating system. There is equipartition of energy
    as between charge displacement and kinetic energy. Now, if this energy system of the field
    medium is caused to move in one region relative to another region of that same medium,
    this invokes that constraining action because the aether charge is kept in synchronized
    motion at a universal rhythm, the photon frequency at which the surplus energy can
    materialize as electron pairs or heavy electron pairs (the latter being otherwise known as
    `muons’).

    If, on the other hand, one interferes with this activity by producing a positively charged
    cluster of ions, this sets up a radial electric field and forces radial charge displacement in
    that aether medium. This would upset the timing as each displaced element of aether charge
    in its quantum orbit moves faster about the centre of the orbit in one half cycle and then
    slower in the next half cycle. However, the synchronizing power coupled to all that energy
    in the aether resists that and assures a perfect phase-lock with the result that, to hold
    smoothly in that state, the whole system of aether charge has to rotate about the centre of
    that radial electric field. The glow discharge in the Correa tube becomes the seat of what
    this author has called ‘vacuum spin’. Such a spin condition derives its power by drawing
    on energy from the universal field system enveloping the glow discharge. In other words,
    the action promotes the inflow of aether energy from outer space.

    The key to all this is that synchronizing influence or phase-lock that is at the very heart
    of quantum theory, this being a theory that represents the properties of the harmonic
    oscillator and is governing at the microcosmic level where individual electron motions are
    coupled to the action quantum. Planck’s constant is, in fact, determined by the structural
    form of the array of aether charge which constitutes the elusive, but real, medium we call
    the ‘vacuum’.

    This link between the vacuum medium or vacuum energy field and electrons is crucial
    to our problem of tapping energy from what we see as empty space, but to get things
    started we need to set up that positive core charge. Here, rather than just using electric
    field effects to pull electrons out faster than the positive ions can make their way to the
    cathode of a discharge tube, we find that the action can be augmented electrodynamically
    as a function of current discharge.

    The heavier mass of the positive ions helps enormously in making them more sluggish,
    but it needs real force to compress those ions into a positive ball of plasma and here the
    anomalous electrodynamic interaction forces along the current axis become effective.

    It is a curious fact of accepted physics that the interaction forces between two charges
    in motion are assumed not to have any dependence upon the mass of the particles
    transporting those charges. We use Newtonian mechanics to argue that momentum has to
    be conserved, momentum depending upon mass, but somehow eliminate mass from the
    electrodynamic problem. Why then should we be surprised to hear that when experiments
    are performed involving charge interactions between heavy particles and light particles,
    atomic or molecular ions and electrons, we encounter energy and momentum anomalies?

    The very substantial anomalous cathode reaction forces observed in reported
    experiments indicate that a powerful force is exerted on the cathode with no counterpart
    reaction on the anode. They indicate, by theory alone, that energy is being shed by the
    inductive system in excess of that supplied when the discharges through the device are
    pulsed. However, the Correa research gives us the experimental proof.

    As an aside here, it is mentioned that the energy source is much the same as that
    already discussed in Report No. 1 in this series, where the author has pursued his interest
    in ferromagnetism to show that the energy set up inductively in a gap between two magnetic
    poles can exceed the energy input to a magnetizing winding. The energy source in the latter
    case is the quantum priming of the electron motion in the atoms in the ferromagnet.
    However, in the Correa situation, access to that energy is more directly associated with the
    motion of the underlying electromagnetic frame of reference. In a sense, the quantum world
    involves microscopic orbital motion of a charge system constituting the vacuum medium
    at a very high frequency, the Compton electron frequency, whereas superimposed on this
    there is a low frequency rotation of a very extensive electromagnetic system. Both of these
    motions feed the anomalous energy to the Correa apparatus.

    The Earth would have to stop rotating and to arrest its translational motion with the
    local galaxy before the energy resource harnessed by the electrodynamic action in the
    Correa apparatus could be exhausted. However, the energy of the quantum activity at that
    Compton frequency will never be exhausted, simply because the rest condition of the
    vacuum medium is one of negative electric potential and the absolute ground state cannot
    go sub-zero anywhere. Then, because energy is conserved overall, we must have activity,
    meaning motion of charge, which keeps the charge displaced to positions of positive finite
    potential in which its motion stores additional energy, the fluctuations of which give life to
    the universe.

    Now, physicists, except at least for this author, who is also professionally qualified as
    a physicist, are locked into the belief that momentum as well as angular momentum are
    conserved, meaning that an isolated system cannot by its internal interactions develop any
    angular momentum or linear momentum. For this reason, so far as they are at all interested
    in the problem, they have been very perplexed by the fact that the solar system has angular
    momentum that is not zero. Indeed, the Sun and the planets all rotate in the same sense and
    so the Sun must have been created in a rotating state before it shed matter to form the
    planets. By standard physical theory this is not possible but it is nevertheless an indisputable
    fact. How then have cosmologists come to terms with this problem? It is all too easy to
    say that the angular momentum was there, shared by matter in its galactic circulation, before
    that matter condensed to form the Sun, but that says nothing about how it all started. One
    hypothesis was that another star grazed past the Sun to set it in rotation and in the process
    disperse the matter that condensed to form the planets. Yet when the chance of this
    occurring was estimated, it was found to be so improbable that, of all the stellar systems in the
    universe, the solar system might well be unique in having planets. Another
    hypothesis was that all the stars were created together in a Big Bang and were so close at
    the time that they could exchange angular momentum and so move outwards in a spinning
    state.

    What is not seen as possible by accepted physics teaching is the acquisition of angular
    momentum and linear momentum as energy was fed into the nucleating star to create it.
    Energy transfer from ‘somewhere’ surely implies that momentum and angular momentum
    can flow in from that same ‘somewhere’. So it seems very logical for some of us to be open
    to the possibility that somehow Nature has a way of breaking faith with what we have
    adopted as the laws of physics, because, as surely as the Sun was created, there is a physical
    process that is non-compliant with our modern physics teaching.

    The author submits that it was the initial onset of gravitation that triggered creation and
    caused the dispersed electric charges in the universe to condense to form stars, in much the
    same way as ferromagnetism appears in iron as it cools through its Curie temperature. This
    brings into account the anomalous transfer of energy and momentum to matter. The heavy
    protons would converge to form the stellar nucleus before the lighter-mass electrons could
    come together to neutralize the star so formed. The electrodynamic interactions between
    electrons and heavy ions during this primordial period would set up the linear momentum
    of the star and the transient radial electric field in the conductive plasma would develop the
    vacuum spin which imparts the angular momentum.

    The technological discovery evident from the Correa research is therefore giving direct
    evidence of the anomalous electrodynamic force interactions between heavy ions and
    electrons, which go hand in hand with anomalous momentum and anomalous energy. The
    physics involved in such research is much closer to the subject of energy powering the Sun
    than is the physics of nuclear fusion.

    Readers who decide to look up their book references on Newton’s laws should consider
    the right way and the wrong way of presenting those laws in the light of our knowledge of
    electrodynamics. Newton himself, if he were alive today, would surely be prepared to
    restyle his argument if, by doing so, he could adapt the laws to extend their cover beyond
    macroscopic mechanics and embrace the microscopic dynamics of electric particle
    interactions.

    Firstly, note that it is Newton’s third law of motion that is in issue, the balance of action
    and reaction. Newton combined this law with the principle of conservation of energy and
    was able to deduce that two particles, not necessarily having the same mass, would emerge
    from a collision with their relative velocity reversed. In sharing their energy the velocities
    of the two particles have to adjust so as to assure that they separate from the collision with
    a relative velocity that is -1 times their relative velocity just before impact. This is known
    in mechanics as ‘Newton’s rule’.
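    ‘Newton’s rule’ is easy to verify numerically: for a one-dimensional elastic collision, the standard formulae obtained from conservation of momentum and of kinetic energy yield final velocities whose relative velocity is exactly -1 times the initial one, whatever the two masses. A minimal check with arbitrary values:

        def elastic_collision(m1, u1, m2, u2):
            # Post-collision velocities from momentum and energy conservation.
            v1 = ((m1 - m2) * u1 + 2 * m2 * u2) / (m1 + m2)
            v2 = ((m2 - m1) * u2 + 2 * m1 * u1) / (m1 + m2)
            return v1, v2

        m1, u1, m2, u2 = 2.0, 3.0, 5.0, -1.0      # arbitrary masses and speeds
        v1, v2 = elastic_collision(m1, u1, m2, u2)
        print(u1 - u2, v1 - v2)                   # 4.0 and -4.0: reversed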

    Secondly, note that it is logical that if two conditions determine a third condition then
    that third condition taken with one of the two conditions can determine the other of the two
    conditions. If, therefore, Newton had taken his ‘rule’ to be his third law, especially as it is
    more easily demonstrated, as by propelling a metal ball into another at rest and observing
    that it transfers its motion to that other ball, then he would have got things the right way
    around. The new law would be a ‘law of relative motion’ and, taken together with the
    principle of energy conservation, one can then deduce that action and reaction follows ‘as
    a rule’.

    Thirdly, given this latter entry to the physics of electrodynamics, one can give support
    to the ‘law of relative motion’ because electric charge interactions are dependent upon
    relative position and so upon relative motion, but not dependent upon mechanical inertia.

Fourthly, however, we have a new scenario once electrodynamics gets into the act,
    because whereas a pure mechanical system involves the summation of discrete collisions
    between pairs of constituent particles, which only see their own energies as involved at the
    instant of collision, the case is entirely different for the electrodynamic interaction. The
    reason is that there will invariably be numerous other electric particles in motion in the
    immediate locality of the colliding charges. The conserved energy is not exclusively that
    of a collision between a discrete pair of charges.

    In the latter situation the derivation of the ‘rule’ that action and reaction are always
    equal will fail. Energy will always be conserved but one cannot in this case formulate the
    relevant energy exclusively in terms of the square of a relative velocity. In mathematics
    every square power of a quantity has two roots, one positive and one negative, which is
    why we see the relative velocity of two colliding balls reverse after their impact. It is all a
    question of mixing vector and scalar quantities. Energy is a scalar quantity but velocity is
    a vector. We can take numerous particles conforming with linear vector equations and add
    their individual contributions to determine the overall state of a combined system, but once
    we start changing those vectors by working out the square roots of component scalar
    energy quantities, without being able to exclude the external cross interactions between
    charges acting on the two in collision, we really are headed for trouble.

    The well proven laws we have accepted for mechanics cannot be applied to practical
    situations where there is a dominant electrodynamic effect involving the interaction of
    electrons and heavy ions.

    This rider has been added because Nature contrives to deceive us in a rather curious
    way when we apply the Newtonian philosophy to the electrodynamics of the closed circuital
    all-electron current flow. We find we can use the Lorentz force law which does not
conform with Newton’s law of balanced action and reaction and apply this to all the
    discrete elemental current circuit interactions to find in the end that they sum to give the
    balance needed to satisfy Newton’s law for the circuit as a whole.
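That summation is readily checked numerically. The following Python sketch (a hypothetical geometry: two coaxial circular loops, each discretized into current elements) shows that the Lorentz (Grassmann) force between a single pair of current elements is unbalanced, whereas the forces summed around the two closed circuits balance to within the discretization error:

    import numpy as np

    MU0_4PI = 1e-7                    # mu0/(4 pi) in SI units
    I1 = I2 = 1.0                     # loop currents, amperes

    def loop(radius, z, n=120):
        # Midpoints and directed segment vectors of a circular loop at height z.
        t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
        pts = np.stack([radius * np.cos(t), radius * np.sin(t),
                        np.full(n, z)], axis=1)
        seg = np.roll(pts, -1, axis=0) - pts
        return pts + 0.5 * seg, seg

    def pair_force(dl_b, r_b, dl_a, r_a):
        # Grassmann (Lorentz) force on current element b due to element a.
        r = r_b - r_a
        d = np.linalg.norm(r)
        return MU0_4PI * I1 * I2 * np.cross(dl_b, np.cross(dl_a, r / d)) / d**2

    def circuit_force(mid_a, seg_a, mid_b, seg_b):
        # Total force on circuit b due to circuit a, summed element by element.
        F = np.zeros(3)
        for dl_a, r_a in zip(seg_a, mid_a):
            for dl_b, r_b in zip(seg_b, mid_b):
                F += pair_force(dl_b, r_b, dl_a, r_a)
        return F

    mid1, seg1 = loop(0.10, 0.00)     # loop 1: 10 cm radius at z = 0
    mid2, seg2 = loop(0.10, 0.05)     # loop 2: 10 cm radius at z = 5 cm

    # A single pair of elements: action does not balance reaction.
    print(pair_force(seg2[0], mid2[0], seg1[30], mid1[30]) +
          pair_force(seg1[30], mid1[30], seg2[0], mid2[0]))

    # The closed circuits as wholes: the sum is zero apart from rounding.
    print(circuit_force(mid1, seg1, mid2, seg2) +
          circuit_force(mid2, seg2, mid1, seg1))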

    This is a quirk of the mathematics of this situation combined with the fact that the
    Lorentz formulation prescribes force on charge in motion acting at right angles to that
    motion. A force so directed can do no work and so the summation of all the individual
    interactions will result in no work being done, meaning that the circuit carrying current is
    not giving or drawing power from its field environment. It can therefore not assert force
    on that environment and so internally its action overall must balance its reaction. Yet, as
    soon as we change that current, there is inductive energy exchange with that field
    environment, which means that somehow the forces on the electrons moving through that
    circuit are no longer at right angles to charge motion. Electric fields have been set up by
    induction effects and the moment these are introduced one is bringing into play empirical
    rules, all of which have been discovered by experiments where at least one of the interacting
    current sources is all-electron closed circuit flow or its equivalent.

    Once one departs from the latter constraint one enters a realm needing new physics
    tailored to the problem of electrodynamic interaction between heavy ions and electrons,
    because the mass of the charge carrier has to play a role in the dynamics of force-producing
    situations. In fact, analysis shows that it is the mass ratio between two interacting charge
    carriers that is the dominant consideration and it so happens that in the all-electron current
    flow circuit this ratio is unity, thereby disguising its true relevance in an electrodynamic
formulation. Once that ratio is measured in thousands, as is the case for the heavy ion to
    electron mass ratio, then we enter a whole new scenario, the one into which the Correa
    research has ventured.

    In summary, therefore, the root of the problem of understanding why the Correa
    apparatus actually works is intermeshed with the basic principles of Newtonian mechanics
    and their inadequacy in coping with the conditions peculiar to the electrodynamic
    interaction. To adjust our theories to the facts of the Correa experiments and at the same
    time bring conformity and unification into the connection between Newtonian mechanics,
    gravitation and electromagnetism, we need to correct the empirical law of electro-dynamics
    so that it embraces the interaction between heavy ions and electrons. There has to be a
    mass term in the law of electrodynamics.

    This author derived the inter-electron interaction law nearly 40 years ago and the
    version with the mass ratio term some 30 years ago but it was not until 1969 that its
    derivation featured in a scientific paper as published by the Journal of the Franklin Institute.
    This was referenced in the earlier introduction. The law thus formulated in no way conflicts
    with the accepted Lorentz law when applied to the same problems, those involving closed
    circuital electron current. It indicates powerful anomalous forces on heavy ions flowing
    between electrodes in a gas discharge tube where the current circuit is completed by a
    partially closed electron circuit. These anomalous forces generate the build up of electric
    charge at the cathode which establishes the condition needed to trigger excess power output
    from the Correa discharge tube.

    The Dilemma Confronted

    Hopefully the reader will now join the author in confronting the dilemma which has
    been introduced in the foregoing pages.

    We have on the one hand certain anomalous facts of experiment which have been
    building up over many years and are now crowned, not so much by the Correa discovery,
    but by the fact that the Correa patents disclose so much experimental data that technologists
    have the way charted to begin to invade the new energy world.

    We have on the other hand a well established scientific belief system enshrined in
    notions of the so-called Big Bang creation of the universe and the notions of a relativistic
four-dimensional space-time metric which aims to destroy belief in an aether brimming with
    energy.

We have, intermediate between these extremes, the knowledge that physicists and cosmologists
    openly admit that they are still searching for their Holy Grail, the Unified Field Theory, by
    which they mean a theory conforming with the Einstein doctrine but yet bridging the gap
    between electrodynamics and gravitation.

    Supplementary to this we have the very extensive theoretical contributions of this
    author, all built on the revision of physics resulting from acceptance of a vacuum energy
    medium and that law of electrodynamics as adjusted to permit anomalous force imbalance
    in the interaction of moving electric charges of different mass. The author’s theory is an all-embracing unified field theory.

    In the appendices which follow this text many of the relevant references will be listed.
    The author has come to realise that the scientific community is so entrenched in its
    dedication to the Einstein doctrine, which is linked with the Lorentz force law, that no
    amount of contrary reasoning based on new theory will be heeded.

    This is why the experimental discovery demonstrable by Dr. Paulo Correa and Mrs
    Alexandra Correa is of vital importance, not just as a way forward which offers us direct
    access to a new source of energy, but for its scientific significance.

    The anomalous cathode reaction forces discovered and duly recorded by past
    experimenters have been swept aside by physicists with the presumption that there must be
    sufficient electrode vaporization to explain the cathode reaction force. That process would,
    of course, impart a back pressure on the vapour that would assert a balancing force on the
    anode. However, I am not aware of any tests that confirm the balancing reaction on the
    anode and I find it difficult to understand how vaporization of a metal can impart much
    more energy to the freed atoms than is implicit in the latent heat of vaporization. See the
    later comment on this point.

    The generation of excess energy in seeming breach of the law of action and reaction
    is the decisive factor in determining the scientific truths involved in this situation and the
    Correas have taken us forward decisively on that front.

    What lies ahead, therefore, is not only the entry into a new energy regime, but the
    prospect of a developing thrust in aerospace applications and a scientific revolution as the
    extremes of modern philosophy in physics collapse into a more rational picture.

    The dilemma the reader now faces is whether to do nothing and simply watch events,
    leaving the task to others, or whether to explore and probe the Correa claims to try to trace
    a flaw in their experiments, (if there is one to be found!) or whether to stand by the principle
    that what amounts to ‘perpetual motion’ is impossible and so pass judgment solely on the
    strength of that conviction.

    It may or may not help the argument to say that each atom in the reader’s body
    exemplifies the reality of ‘perpetual motion’, because if the reader were to die and be cooled
down to the absolute zero of temperature, minus 273°C, the electrons in every atom in the
    reader’s body would keep moving perpetually. All that is suggested by the new theory
    underlying the new energy science is that, by understanding the quantum activity of the
    aether and showing how it determines the Planck constant and regulates electron motion
    in atoms, we can see a way forward to tapping into that energy system.

    The obvious challenge comes in the statement: “Prove it by demonstrating something
    that works.” Well, the Correas have done that! Yet even that will not be enough to turn
    the scientific world upside down, because the cry then is: “Where does the energy come
    from?” Well, the author has outlined the explanation above! Yet that will not be enough,
    because the scientists who are able to judge the theoretical arguments would rather not
    waste time in that effort, being so confident that there must be flaws.

    So, assuming the worst case scenario, the final arbiter is likely to be the public at large,
    those who care little about where the energy comes from, so long as it is cheap, plentiful
    and non-polluting. That means that we will need to see several technologies develop all
    generating energy in a way which confounds the physicists but all aimed at the domestic
    market or the small user, such as by providing back up power to keep electric batteries on
    a boat charged when the boat is not being used.

    In saying this, the author is aware of initiatives around the world and particularly in
    Japan which do seem to be backed by adequate funding and which suggest a lower level of
    prejudice against the new energy theme. So it is really now a question of waiting to see
    how the situation develops, trusting that what is explained in this Report will contribute to
    forward developments.

    Even though we will hear much about new technology in the form of anomalous heat
    generation at water temperatures and new motor technology in which magnets play a
    dominant role, all pointing to excess energy generation, whichever technology is the first
    to command enough attention to convert Establishment opinion from disbelief to belief will
    pave the way for acceptance of the other technologies.

    In this race, the Correa technology has a distinct advantage in that it is already the
    subject of three granted patents in USA, in that the claims of these patents cover, quite
    broadly, the three key aspects of what the Correas have discovered and in that the scientific
    basis is seated in anomalies long recognized in scientific literature by authoritative
    institutional researchers.

    This is why much of the remainder of this Report simply gives references and abstracts
    as there seems little that needs further explanation other than the provision of a brief
    description of the way in which the Correa tube taps aether energy.

    As with any technological development introducing a new electronic device it will be
    necessary to involve experts in the design for mass production, with particular attention to
    the problem of enhancing electrode lifetime. There will be scope for more invention in
    improved design structure of the electrode configurations and choice and composition of
    electrode materials, but there are some new and interesting principles embodied in the
    patented Correa apparatus and these should survive and have value as the technology comes
    into commercial use.

    I am writing this section of text on February 13, 1996 and have just received my copy
    of the February 1996 issue of New Energy News, a monthly publication edited by Dr. Hal
    Fox and issued from a postal address P.O. Box 58639, Salt Lake City, UT 84158-8639.
At pp. 10-12 of that issue there appears my summary of the one-hour documentary shown on British television on 17th December 1995 concerning ‘free energy’. That programme included reference to research
    on energy from plasma discharges, notably by reference to the research of Professor
    Chernetskii in Moscow, but the programme was compiled before news concerning the
    Correa discovery came to light. Accordingly, in my submission to New Energy News I did
    refer to the Correa research in Canada. I find, incidentally, that the editor, Hal Fox,
    interjected the additional note that “The work by Kucherov, Karabit & Savitimova has also
    shown excess heat generation from a `glow discharge’”, but I have at this time no data on
    that subject.

    The remaining body of this Report will concentrate on a simple illustrated exposition
    as to why the abnormal glow discharge in a Correa tube generates excess energy and how
    that energy is taken off as electric power rather than as heat. So far as possible what is
    presented will be extracts from what has already been published on the subject, since it is
    not appropriate to elaborate new theories to explain the operation of the technology
    discovered by Dr. Paulo N. Correa and Mrs Alexandra Correa. The object here is to show
    that the scientific basis of the discovery is something in common with natural phenomena
    that have hitherto defied accepted explanation when the physics was there on record but
    was ignored.

    Operational Characteristics of Correa Discharge Device

    The excess energy mode of operation of a typical Correa discharge device involves
    cyclic current oscillations in the EF region of the operational characteristic depicted in Fig.
    1 of U.S. Patent No. 5,416,391.

    Note that AGD denotes the abnormal glow discharge region. The plotted data show
    how current varies as the voltage between the electrodes increases. There are two regions
    of negative resistance. The one at higher current is used to develop pulsating current
    oscillations which allow excess energy to be drawn from the device.

    To get current to flow between the two electrodes in a cold-cathode discharge tube
containing a rarefied inert gas such as argon, a sufficient voltage, of the order of 1,000 V,
    is needed to initiate ionization. Much then depends upon the circuit connected to the tube
    and the load conditions that can limit the current to certain levels, which in the case of the
    Correa invention hold the current in a stable pulsating oscillation mode. Normally the
    current will climb to the VAD region where the high current vacuum arc discharge
    condition applies. That state does not deliver excess energy output.

    Once the ions are formed (Fig. 2) a flow of current through the tube arises by the
    attraction of electrons to the anode and the migration of the positive ions owing to their
    attraction to the cathode.

    Fig. 2

    Because the heavy positive ions do not move as rapidly in the field between the
    electrodes as do the electrons, there will be a residual positive space charge established,
    particularly adjacent to the cathode.

    This means that there will be a radially directed electric field gradient from the centre
    of the glow discharge. Now, how does the medium to which we attribute electromagnetic
    wave propagation in terms of Maxwell’s displacement current respond to a radial electric
    field? It responds by trying to cancel the plasma charge field, just as it does in a parallel
    plate capacitor. Nevertheless there is a difference. There the electric field is applied from
    outside, namely from the electrodes, and the Maxwell displacement, which comprises two
    separated layers of charge of opposite polarity, simply confronts the charge on the plate
    electrodes and screens it by placing one polarity charge adjacent one plate electrode and the
    other polarity charge adjacent the other electrode.

    In the absence of an electric field vector, the scalar reaction of the aether is to store
    energy by equipartition between kinetic and electric displacement energy by expanding the
    radii of the orbital quantized motion of the elements of its aether charge. This is the basis
    of this author’s theory for the photon and the derivation of the theoretical value of the fine-structure constant [Reference 2 in the bibliography].

    The aether responds to the field vector mode of a linear electric field displacement by
    storing energy as electrical field energy. This amounts to an internal strain in the aether
    and, if a gas is present, it may become ionized. Here there is translational motion of the
    charge system in the aether but no kinetic energy is added to its overall quantum state
    because the displacement of the charge orbits in local aether regions is affected by
    synchronizing constraints exerted from external aether regions which assure phase lock.
    These constraints, rather than the applied electrical field, provide the energy needed to
    sustain that translational motion. This is why the aether cannot be sensed in terms of the
    mechanics of linear motion. This aspect is a subject mentioned in reference [57].

The aether responds similarly to the radial electric field vector, because it
    is able to set up a state of spin or rotation which involves inflow of kinetic energy in the
    aether itself, energy which is supplied from the external aether owing to the phase-lock just
    mentioned. In this case, if the external influence which sets up the radial electric field
    subsides in strength, the phase-lock persists but the kinetic energy which has been fed into
    the field system from the external aether cannot return to its source by virtue of that same
    phase-lock. This is akin to the situation where a dog with its feet firmly locked to the
    ground can wag its tail, but the tail cannot wag the dog and, with it, body Earth. Therefore,
the energy in the aether spin has to be shed in a different way, namely by virtue of the synchronizing constraint, which now forces a radial charge displacement powered by the captured aether spin energy.

    In other words, what is stored in the spin state as aether input energy becomes available
    as electric field energy which can be tapped by drawing power from the electrodes of the
    Correa tube, just as if the glow discharge were a capacitor.

    To do this it is necessary to have pulsations and here there is an aspect which warrants
    further theoretical research, but which seems to have already found a practical solution in
    the Correa device. The point of interest is that, in theory, we need to add as many joules
    of energy to build the electric field condition as we can expect to draw in as excess energy
    from the enveloping aether. This is because the aether has certain harmonious features
    consistent with equipartition of energy between electric and dynamic (kinetic or magnetic)
    states. The puzzle then is that of understanding how energy efficiencies in excess of 200%
    are possible. The answer is easily found if there is a Q factor applicable to the circuit,
    meaning that the electrical energy oscillates as between the discharge and an external
    capacitor. However, it may well be that in the Correa tube the extended form of the
    cathode in relation to the electrode spacing allows multiple discharge zones which can
    cooperate in exchanging some portion of the electric energy whilst the aether energy inflow
    is pumped into all such zones in each external pulse cycle.

    In summary, therefore, there are undoubtedly some very special advantages in the way
    in which the Correa discharge tubes are designed. The design of the electrode configuration
    as covered by the Correa patent position seems therefore to be crucial to securing high
    conversion efficiencies with excess power generation well in excess of 200%.

    What is clear is that the radial separation of positive and negative charge in the plasma
    in the Correa discharge tube will capture large amounts of aether energy. Fig. 3 depicts that
    radial separation and shows two capacitors denoted C which make a circuit connection with
    a load resistance.

    Fig. 3

    The object here is to set up an oscillation in the a.c. output circuit connected in parallel
    across the discharge tube electrodes.

    Suppose that there is an oscillation which allows us to draw a.c. current through the
    load. There will be times when the current through the tube collapses rapidly and this
    means that the current in the discharge drops. The rate at which positive ions are being
    created will drop as well and so the radial electric field can fall below the value
    corresponding to the state of aether spin. This then uses the kinetic energy of the aether
    spin to set up radial electric field displacement in the aether itself and that, in turn, releases
    the plasma charge at a higher potential, corresponding to that negative resistance
    characteristic. The result is that the tube delivers power drawing on aether spin and sheds
    it in those output current pulsations that are channelled around the a.c. shunt loop through
    the load resistor shown in Fig. 3.

    Now, to accentuate this effect, one of the features of the Correa patents involves a
    discharge tube having an extended cathode structure with a relatively small anode in fairly
    close proximity. As indicated in Fig. 4, this has the effect of spreading the cathode current
    and so the distribution of positive ions over the area of the extended cathode, whilst the
    anode current is more confined to the central part of the tube.

    Fig. 4 ……………………..Fig. 5

Thus, in Fig. 5, the way in which current flows through the tube is illustrated by the
    separation of positive ions and electrons. These can recombine, as by the electrons entering
    the anode migrating around the d.c. supply circuit path to find their way to the cathode.
    However, the significant point of interest is that the AGD discharge has a charge storage
    feature which is depicted by the notional capacitors illustrated inside the tube.

    One has then to visualise a region of aether spinning about the centre of that plasma
    forming the glow discharge and contriving to contain the build-up of an enormous amount
    of charge separation. Under the cyclic relaxation control of the suitably-adjusted
    parameters of the external load circuit, the oscillation which develops can literally pump
    energy from the aether as the positive ion state of the plasma is increased and allowed to
    decrease, increasing under control of the power input, but decreasing spontaneously to
    draw on the aether energy stored once the input loses control.

    It is not the purpose of this Report to describe precisely how the circuits of the Correa
    experiments are designed to exploit this phenomenon, but before mentioning other related
    research and before giving further explanation of the physics underlying the spin
    phenomena, one example of the reported performance data will be quoted from U.S. Patent
    No. 5,449,989. That patent together with the other two already mentioned show several
    circuit diagrams to which the reader can refer.

    Performance Data Exemplifying the Correa Discharge Device

In experiment No. 8 as listed in Table 5 in column 36 of the patent specification, it is shown that a battery pack of 46 batteries, each of 12 V rating, provides an input voltage of nearly 600 V. As energy is drained from this battery pack a separate pack of 28
    such batteries is charged by the rectification of a.c. output drawn from the pulsating
    oscillations of the discharge tube.

    The experiment begins with the driver pack at a voltage of 582 V, corresponding to
    12.65 V per cell, which was an 87.5% state of charge. The charge pack had an initial
    voltage of 328 V, corresponding to 11.71 V per cell, which is a 20% state of charge.

    The cathode in the discharge tube was of hardened aluminium and had an area of 64
    sq. cm. There was a 4 cm. gap between electrodes and the gas pressure in the tube was 0.8
    Torr. The experiment ran for 28.5 minutes.

    Thereafter, the driver pack was found to have lost very little of its charge, its voltage
    having reduced to 579.5 V, corresponding to an 84% state of charge. It had shed 0.134
    kWh of energy. In contrast the charge pack had climbed to a voltage of 350 V and become
76.5% charged, an energy increase of 1.213 kWh, a ninefold gain. The energy conversion efficiency was greater than 900%.
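The arithmetic behind these figures is easily checked; the following short Python sketch uses only the numbers quoted above from the patent table:

    driver_cells, charge_cells = 46, 28
    print(582.0 / driver_cells)        # 12.65 V per cell at the start
    print(328.0 / charge_cells)        # 11.71 V per cell at the start
    print(350.0 / charge_cells)        # 12.50 V per cell at the finish

    energy_shed   = 0.134              # kWh lost by the driver pack
    energy_gained = 1.213              # kWh gained by the charge pack
    print(energy_gained / energy_shed) # ~9.05: the ninefold gain, i.e. >900%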

    The Vapour Reaction Hypothesis

    The conventional assumption concerning cathode reaction force in the cold-cathode
    discharge is that the discharge involves vaporization of the cathode material. The reaction
    force on the cathode is then attributed to the rate at which momentum is imparted to the
    ejected vapour. The speed of ejection times the rate of loss of cathode mass should then
    equal the measured anomalous force.

    It is therefore interesting to compare that speed, and the kinetic energy it implies for
    an atom shed by the cathode, with the thermal state of such an atom just prior to its release,
    as determined from the latent heat of evaporation of the cathode metal.

    For an aluminium cathode, given that the latent heat of evaporation is 10,800 J/gm, the
    speed of the vaporized atoms can be little more than 5,000 m/s. This is estimated by
equating the kinetic energy of unit mass to the energy 10,800 J. It follows that the force of 245.2 dynes, as measured and reported in Table 15 of the third of the Correa U.S. patents for a current of 1.6 A, will require the cathode to vaporize at the rate of 490.4×10⁻⁶ gm/s to impart the necessary rate of reaction momentum to account for that cathode reaction force.
    This assumes that the force is not set up by electrodynamic action.

    Now, in column 20 of U.S. Patent 5,449,989 the rate of erosion of cathode material
    is discussed on the basis of the Correa data on actual measurements of craters formed by
    vaporization activity. That data allows the conclusion to be drawn that electrodes having
a mass of less than 100 gm would have a useful life equivalent to the generation of 40 MWh of energy.

    Assigning 250 V to the 1.6 A current implies output power of 400 W and, if that were
    to consume the cathode at the rate estimated above, a 100 gm electrode would be
consumed in 56 hours, corresponding to a lifetime energy production of 0.0224 MWh.

    This is discrepant by a factor of the order of 1,000 when compared with the erosion
    observed. It follows, therefore, that the cathode reaction force has to be almost wholly
attributable to some cause other than reaction produced by vaporization. Hence the
    anomaly already discussed in connection with this new source of energy!
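For readers wishing to retrace this chain of arithmetic, the following Python sketch reproduces the figures above, adopting the rounded ejection speed of 5,000 m/s used in the estimate (the latent heat alone gives about 4,650 m/s):

    latent_heat = 10800.0 * 1000.0           # J/kg for aluminium
    print((2.0 * latent_heat) ** 0.5)        # ~4,648 m/s from (1/2)v^2 = L
    v_eject = 5000.0                         # m/s, the rounded working figure

    force = 245.2e-5                         # N (245.2 dynes, Table 15)
    mass_rate = force / v_eject              # kg/s needed to carry that momentum
    print(mass_rate * 1000.0)                # ~490.4e-6 gm/s, as stated

    power = 250.0 * 1.6                      # 400 W, assigning 250 V to 1.6 A
    lifetime = 0.100 / mass_rate             # seconds to consume a 100 gm cathode
    energy_MWh = power * lifetime / 3.6e9    # lifetime output in MWh
    print(lifetime / 3600.0)                 # ~56.6 hours
    print(energy_MWh)                        # ~0.023 MWh
    print(40.0 / energy_MWh)                 # discrepancy factor, order 1,000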

    The Author’s 1977 Plasma Discharge Device

    This was the subject of U.K. patent Application No. 2,002,953. It proposed the
    concentration of heavy positive ions in a central chamber by the anomalous electrodynamic
    forces of the cold-cathode discharge, with the object of producing heat in excess of that
    generated by electrical input power. The invention was based on the recognition that the
    aether can shed its `intrinsic’ energy.

    The last paragraph of the specification was:

    “The ion acceleration technique provided by this invention becomes, in such
    situations, a catalyst by which high energy concentration in suitable ionizable
    media may trigger transformations and possibly release of intrinsic energy.”

    The circuit shown in the following reproduction of Fig. 11 of the patent had the merit
    of avoiding cathode overheating by injecting ions into the heat generation chamber and
    subjecting them to accelerating effects produced electrodynamically by auxiliary cold-cathode discharges.

    Spence’s 1986 Energy Conversion System

This was the subject of U.S. Patent 4,772,816. See the figure below, reproduced from the patent.

    Geoffrey M. Spence of Crowborough in England assembled operative plasma discharge
    devices which generated more electrical power output than was supplied as input.

    The abstract of the patent reads:

    “The apparatus uses a magnetic field (80) to accelerate a charged particle radially
    towards a target electrode (10). The increased kinetic energy of the particles
    enables the particle to give up more electrical energy to the target electrode (10)
    than was initially given to it. This charges the target electrode (10), and the
    increased energy is extracted from the apparatus by connecting an electrical load
    between the target electrode and a point of lower or higher potential.”

    Chernetskii Vacuum Energy Breakthrough: News Release dated 1989

    The Novosti Press Agency, Moscow, USSR issued their Press Release No. 03NTO-890717CM04 in 1989. A few sentences from that document are quoted below:

    “Abstract: A design model of a plasma generator which can convert physical-vacuum energy into electricity has been developed under Professor Alexandr V.
    Chernetskii at the Moscow Georgi Plekanov institute of the National Economy.
    Such generators could lay the groundwork for a future environmentally-benign
    power industry.”

    “Classical physics cannot explain what happens when a plasma discharger placed
    in a Chernetskii circuit is started. For no apparent reason the ammeter pointer
    suddenly shows triple strength of current increase and energy output is several
    times more than input. The plant’s efficiency is suddenly much more than ONE!
    No magic is involved. Additional energy outputs at specific plasma discharges
    have been established in several independent ‘Expert reports’ by staff from the V.
    I. Lenin All-Union Institute of Electrical Engineering (Moscow) of the Ministry
    of the Electrical Equipment Industry. This effect has been checked by different
    methods. Where does this mysterious energy come from?”

    “The self-generating discharge emerges when the discharge currents reach a
    definite critical density, when the magnetic fields they create ensure magnetisation
    of the plasma electrons and they begin to perform mainly cycloidal movement.
    The interaction of currents with their magnetic field forces the electrons to deviate
    to the cylinder-shaped discharge axis and the electrical field emerges. ….. Clearly,
    only part of the tremendous vacuum energy is extracted.”

    “We’ve developed several circuit versions which can find application. In the latest
    experiment which had an input power of 700 watts, the generator produced three
    kilowatt for load resistance, or nearly five times as much. This is only the start
    and not the limit. The calculations for more powerful plants show that many
    megawatts of free energy can be produced from a minimal power source.”

    A Concluding Note

In this concluding note there are two points which it is believed warrant attention. One
    is quite topical in that it has attracted media interest in the vacuum as a new energy source.
    The paper generating that interest is that of C. Eberlein in Physical Review Letters 76,
    3842-3845 (1996) entitled ‘Sonoluminescence as quantum vacuum radiation’. It is the
    phenomenon by which sonic pulsations applied to water result in the water emitting optical
    radiation which betrays the release of energy in bursts which signify high temperatures. See
    also the report by Peter Knight ‘Sound, Light and the Vacuum’ in News and Views in the
    journal Nature (381, pp. 736-737, 27 June 1996).

    This phenomenon is of little practical consequence when measured against the
    discovery underlying the Correa invention, but it shows that scientists need to face up to
    the reality of the new energy world. The sonoluminescence phenomenon is, in my opinion,
another manifestation of the vacuum spin scenario. As tiny air bubbles are compressed at frequencies of 25 kHz, the positive H3O+ hydronium ions and the negative OH- hydroxyl ions in water converge radially towards each bubble of air during the pressure impulse period.
    The heavier ions respond more slowly and each such pulse sets up a small radial electric
    field displacement. This induces aether spin or vacuum spin, with inflow of energy from the
    quantum underworld, owing to the phase lock action of the quantum environment. As the
    pressure relaxes the ions do not recover their original positions owing to the neutralizing
    field effects inherent in the aether spin. Each successive sonic pressure pulse then augments
the effect by forcing further radial charge displacement. This is an escalating situation, broken only when the spin states centred on those air bubbles, with their build-up of vacuum spin energy, grow in physical size until instability sets in, as by surface collision with other such spin states.
    These collisions in their random distribution will be triggered in time with the sonic
    pulsations and local flashes of light will be emitted. In effect what one sees is a kind of very
    tiny thunderball phenomenon, where the stimulus exciting the formation of the glowing balls
    is not an electrical discharge but a pressure wave.

    While physicists ponder on that sonoluminescence phenomenon, those interested in the
    practical pursuit of the new energy opportunity can follow the Correa lead, confident that
    scientists who decry the `free energy’ prospect have their own problems in understanding
    sonoluminescence.

    As indicated earlier in this work, the theme of charge induction by vacuum spin
    featured in my 1977 lecture paper ‘Space, Energy and Creation’ and I stated that I would
quote something from the end of that text. This now follows:

    Finally, an interesting experiment has been performed by Ryan and Vonnegut (1971)*. They arranged
    for a cage to rotate around an electric arc discharge at quite low speed and found that this stabilised the arc.
    The task of stabilising an electric arc is one of the major problems of thermonuclear fusion research. It
    seems therefore very difficult to believe that the wild antics of the arc discharge are tamed merely by the
slow rotation of a column of air. Here then is more scope for research. Can an arc be stabilised in a
    vacuum by cage rotation? It is research which the modern physicist will not readily undertake because there
    is widespread belief that the vacuum is a non-entity devoid of any special properties. It is a belief
    encouraged by the development of relativity and in my experience those who believe in relativity deny the
    existence of the aether. On the other hand I was once reassured by a comment Professor Cullwick** made
    about something I published. He quoted Einstein as saying:

    ‘The special theory of relativity does not compel us to deny the existence of the ether ….
    there is weighty evidence in favour of the ether hypothesis.’

    (H. Aspden, 15 September 1977)

    * Nature Physical Science 233 142 (1971).

    ** Electronics & Power 22 40 (1976).

    APPENDIX I

    WHY THE EARTH IS NOT A SELF-EXCITED DYNAMO

    Introduction

    Readers of ‘The Homopolar Handbook’ by Tom Valone will see that it has the sub-title
    ‘A Definitive Guide to Faraday Disk and N-Machine Technologies’. They will also see on
    its page 78 a reference to a Scientific American article which gives weight to such
    technology by declaring that the Earth is a self-excited dynamo analogous to a Faraday disk
    generator which powers the self-induced magnetic field. The article appears in the February
    1979 issue of Scientific American at pp. 92-101. Its authors are Charles R. Carrigan and
    David Gubbins and it is entitled ‘The Source of the Earth’s Magnetic Field’.

    In the December 1979 issue of Scientific American at pp. 120-130 there is an article
    by Lewis P. Fulcher, Johann Rafelski and Abraham Klein entitled ‘The Decay of the
    Vacuum’. This latter article predicts that matter can be created from empty space in the
    close vicinity of the atomic nuclei of high atomic mass.

    One at least of those two articles just quoted is based on a false foundation, but both
    bear upon the subject of this Energy Science Report.

    I make this statement well recognizing the authority of authors who write for Scientific
    American, but knowing that where magnetism and the aether’s energy properties are
    concerned one really needs to be discerning as to what one is willing to believe.

    In this Appendix I it will be shown why the Earth’s magnetic field cannot be self-induced by homopolar induction. Appendix II reproduces my paper as read at an Institute
    of Physics conference at Oxford University, England in 1983. It provides the authentic
    explanation of the induction of the Earth’s magnetic field as an aether phenomenon,
    consistent with the foregoing analysis of operation of the Correa PAGD technology.

    The Logic of My Case

    1. For there to be self-induction of electric or magnetic effects attributable to the rotation
of any system, that system must comprise a composition of electric charges.

    2. The electrostatic force acting between any two charges is directed along the line
    joining them and there is balance of action and reaction, meaning that the system will
    not develop an out-of-balance reaction force that can enhance or retard a state of spin.

    3. If the system is already spinning then there will be mutually-induced electromagnetic
    forces acting on the charges as each moves under the influence of the field set up by
the motion of the other charges.

    4. By the Lorentz force law these forces act at right-angles to the charge motion. The
effective motion of each charge is in a circular orbit about the axis of spin and so any electromagnetic force can have no component along that orbital motion, that is, no tangential component about the spin axis. This means that no force component will enhance or retard the spin (see the numerical check following this list).

    5. It must be concluded that the mutual-interaction of charges within a spinning body
    cannot set up any electromagnetic forces affecting that spin, this being, of course,
    consistent with the principle that angular momentum is conserved in the absence of an
    external influence.

    6. A consequence of this is that there can be no circulating electric current induced inside
    that system owing to its rotation as that would draw on the inertial spin energy and
    mean that the spin speed must reduce.

    7. This account does not preclude the setting up of EMFs in the body of the spinning
    system of charge because those EMFs would be balanced, meaning that the perimeter
    is at a different potential from that at the axis.

    8. In an operable homopolar generator based on the Faraday disk principle there is a non-rotating return current circuit path external to the rotating disk and that accounts for
    the unbalanced EMF around a circuit whilst providing the external structure which can
    absorb the forces affecting the spin speed of the disk.

    9. Body Earth has no external structure against which to apply the requisite force action
    if it is to slow down owing to self-exciting dynamo properties.
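Here, as promised under point 4, is a minimal numerical illustration (with arbitrary values) that the Lorentz force on a charge in circular motion about the spin axis does no work and exerts no torque about that axis, whatever magnetic field the other charges set up:

    import numpy as np

    q = 1.0                                    # charge, arbitrary units
    omega = 2.0                                # spin angular speed, rad/s
    r = np.array([0.3, 0.4, 0.0])              # position, radial in the xy-plane
    v = omega * np.cross([0.0, 0.0, 1.0], r)   # tangential velocity about z
    B = np.array([0.7, -1.2, 0.5])             # an arbitrary field

    F = q * np.cross(v, B)                     # Lorentz force on the charge
    print(np.dot(F, v))                        # 0: no work done on the motion
    print(np.cross(r, F)[2])                   # 0: no torque about the spin axis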

    The Alternative Solution

    There is, of course, a solution to the mystery of the Earth’s magnetic field, but it
    depends upon something totally unfamiliar to those expert in the physics of field theory.
    It concerns ‘vacuum spin’ and a ‘phase-lock’ effect and that connects the phenomenon of
    the Earth’s magnetic field with the energy activity intrinsic to the aether. It involves a
    process which taps that aether energy, which is why the subject is important in our quest
    to discover a new and commercially viable source of energy.

    Appendix II should now be read, keeping in mind that the self-generating magnetic
    dynamo theory as an explanation for the Earth’s magnetic field is flawed and must be
    rejected.

    _______

    APPENDIX II

    THE THUNDERBALL – AN ELECTROSTATIC PHENOMENON

    This is the text of the author’s paper as presented at Electrostatics 1983, Oxford and as
    published in Institute of Physics Conference Series No. 66 at pp. 179-184.

    Abstract A quasi-static electric displacement according to Maxwell’s theory is
    considered in a novel context, that of a forced radial electric strain centred on a
    source of energy. The resulting balancing charge displacement in enveloping
    matter may have transient stability and should exhibit ionization if gaseous.
    Potentially hazardous pockets of migrant electrostatic energy may well be created
    in the vicinity of electric discharges. Analysis shows the energy content to be
within the range applicable to the thunderball, that is between 2×10⁹ J/m³ and 5×10⁹ J/m³.

    1. Introduction

    Maxwell’s equations are very much a part of the accepted physics in use today. They
    are used without much regard for the physical model on which Maxwell developed his
    theories. Jeans (1966) has referred to Maxwell’s displacement theory as ‘part of the
    scaffolding by which electromagnetic theory was constructed’ but said that it was an open
    question whether this scaffolding ought now to be discarded.

    Some impetus in examining this question stems from the recent experimental discovery
    by Graham and Lahoz (1980) that the field medium can provide a reaction force to quasi-static fields. The evidence from this experiment, which is electromagnetic in character and
    depends upon current displacement between capacitor plates, is so strong that the authors
    ended their paper with the comment that ‘the quasi-static Maxwell’s field is not merely an
    invisible medium of interaction between matter and matter; it has in fact the mechanical
    properties postulated by Maxwell, in contradistinction to any “action at a distance” theory’.

    This encourages the author to present a proposition directly based upon Maxwell’s
    displacement theory. The question at issue is whether the vacuum, as a physical medium
    in its own right, can be set in a state of electrical strain and might, under certain
    circumstances, retain this strain transiently so as to store energy in a quasi-stable manner.
    In particular, it seems worthwhile to ask whether radial electric displacement centred on a
    source of energy has a role to play in physical phenomena. Note that this contrasts with the
    lateral oscillatory displacement we associate with wave propagation. We are considering
    a static displacement such as is associated with the
    storage of energy by a charged capacitor.

    Fig. 1

    We believe that when the parallel plates of the
    capacitor shown in Fig. 1 are electrified, as by the
    potential V, the linear displacement in Maxwell’s field
    medium (depicted by the arrows) effectively neutralizes
    the capacitor charge and stores energy in the state of
    strain in the dielectric and the field medium itself. The
    hypothesis we now address is that Nature may operate
    in the reverse mode, particularly in response to a radial displacement, and somehow sustain
    a state of radial electric strain in the vacuum medium so that it asserts a primary role and
    causes the electric charge in enveloping substance to take up neutralizing positions. Instead
    of the electricity applied to the capacitor causing energy to be stored, we have an event
    accompanied by the injection of energy into the strain storage system of the field medium
    and a consequent electrical adjustment in matter.

    Fig. 2

    A lightning flash is a likely candidate for such an event. Its
    action must be to pinch the discharge into a thin filament in
    which the more mobile electrons concentrate along a core as
    shown in Fig. 2 and set up radial electric strains bounded by the
    inert positive ions.

    If the field medium reacts in some way to preserve this
    strain and store energy in a quasi-stable form for a transient
    period before the electrons and ions recombine, then the
    condition according to the hypothesis outlined above is achieved.
    The resulting pocket of energy optimizes its form to that of a
    sphere and asserts a primary role in keeping the positive and
    negative matter charge displaced, pending eventual decay.

    Fig. 3

    A useful concept giving strength
    to this hypothesis involves an imaginary
    state of spin, which we will term `vacuum spin’. The idea here is
    that if the vacuous field medium were to contain charges capable
    of displacement then it would be feasible to imagine a sphere of
    such a medium rotating as shown in Fig. 3 about an axis through
    the centre of the sphere. The charges would be subject to a centrifugal action and so would be displaced radially. Energy would be
    stored by the spin state and by the radial electric fields induced.
    In the presence of matter such as the atmosphere these fields might
    well be cancelled by ionization and separation of charge in the
    matter itself, leaving only the spin energy. Nevertheless, the spin
    would sustain the electric displacement in the field medium and a transient state of
    ionization pending the eventual dissipation of the spin energy. Hence the vacuum spin
    concept does convey some understanding of the quasi-stable character of the phenomenon
    under discussion.

    2. Theoretical Analysis

    We can proceed to analyze a spherical field system subject to symmetrical radial strain,
    without further recourse to this spin concept and solely by reference to the charge
separation in matter. Consider a spherical shell of negative charge -Q enveloping a uniform sphere of distributed charge +Q developed to balance the radial electric displacement. The electric energy is simply the sum of three terms. These are (i) the self-energy of the distributed charge, 3Q²/5R, (ii) the self-energy of the shell charge, which is Q²/2R, and (iii) the mutual energy of the interaction between -Q and +Q, which is -Q²/R. The total electric energy becomes Q²/10R, where R is the radius of the sphere subject to the electric strain.

    Given this amount of energy applied to form the spherical object under discussion, we
    know that it will be characterized by a charge Q and a radius R connected by the above
    formula.
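The energy bookkeeping is easily verified with exact fractions. Expressing each of the three terms in units of Q²/R, a few lines of Python confirm the total of 1/10:

    from fractions import Fraction

    sphere_self = Fraction(3, 5)    # uniform sphere of charge +Q: 3Q^2/5R
    shell_self  = Fraction(1, 2)    # surface shell of charge -Q: Q^2/2R
    mutual      = Fraction(-1, 1)   # interaction of -Q with +Q: -Q^2/R

    print(sphere_self + shell_self + mutual)   # 1/10, i.e. Q^2/10R in total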

    To proceed further it is helpful now to digress a little and consider the possible creation
    of such a spherical object wholly within a larger spherical object of similar character. This
    is possible because we are talking about fields and the field medium and can envisage
    pockets of this medium permeating matter. As depicted in Fig. 4(a), a sub-sphere of electric
    strain is contained wholly within a much larger sphere of electric strain. The strains are
    radial in each sphere and combine to determine the strain energy density at points within the sub-sphere. Because the main sphere is very large in relation to the sub-sphere we can regard the strain of the larger sphere as uniform over the volume of the smaller sphere. This
    means that when the electric strain vectors of the spheres are combined at points within the
    sub-sphere the cross products will balance and so cancel to leave the energy needed to form
    the sub-sphere independent of the strain within the larger sphere.

    Fig. 4(a) ………………………….. Fig. 4(b)

Fig. 4(a) shows a sub-sphere of radial strain within a larger sphere of radial strain and Fig. 4(b) shows an ion and its associated electron influenced by a non-linear electric strain.

    Note that we have in mind the possible ionization of matter and the separation of
    electrons and positive ions. The electrons present will, by their thermal equilibrium with
    the ions, have a range of travel well in excess of that of ions with which they are associated.
    Thus, collectively each ion and its paired electron will form a system which is electrically
    neutral overall and which can be represented, as shown in Fig. 4(b) by a positive central
    charge surrounded by a spherically symmetrical negative charge distribution attributable to
    the statistical random motion of the electron. This is because the electrons have a much
    smaller mass and a much greater speed and, though confined to the spherical boundary of
    the sub-sphere in order to balance the effects of the strain discussed above, they are less
    confined than any ions present at that boundary.

    Owing to this greater range of motion of the electrons it is the polarity of the electrons
    that determines the direction in which ionized matter tends to move in a non-linear electric
    field. For stable confinement to a sphere the electric strain everywhere within the bounds
    of that sphere must correspond to the action of a positive charge. Thus the radial electric
    strain of the sub-sphere at its surface is limited by the prevailing electric strain in the larger
    sphere and the latter must correspond to the action of a positive rather than negative charge.

    Fig. 5

    Imagine now that what we have described
    occurs in our own environment, with the Earth
    and its ionosphere constituting the larger sphere
    and the subspheres being the thunderballs induced
    in the Earth’s atmosphere. The Earth rotates, as
    depicted in Fig. 5, and so the charge just
    mentioned would rotate to produce a magnetic
    field attributable to a distributed positive charge
    and a balancing negative charge at the upper
    bounds of the atmosphere. Overall this would give
    the Earth a geomagnetic moment attributable to a
    negative charge, which is found to be the case.
Furthermore, no electric field would be detected directly because the strain caused by vacuum spin would be balanced. It is well known (Rosser, 1968) that such balancing strain produces no magnetic field of its own when rotating; otherwise charged capacitors, when rotating, would induce no magnetic field, yet such a field is observed.

The magnetic moment attributable to the collective action of a surface charge Q is readily shown to be:

M = QωR²/5c ........................ (1)

where R is the body radius, ω is its angular speed and c is the speed of light. Note that this expression is in electrostatic units and both the dielectric constant and the magnetic permeability are taken to be unity. For the Earth the geomagnetic moment M is 8.1×10²⁵ gauss-cm³, R is 6.4×10⁸ cm and ω is 7.26×10⁻⁵ rad/s; c is 3×10¹⁰ cm/s. Thus Q is readily found, and so the surface electric strain Q/R² as applicable in atmospheric regions.

    This sets the surface strain of the sub-spheres and determines the energy density
    associated with their overall energy. The mean energy density of any such sub-sphere is
found by dividing Q²/10R by the volume 4πR³/3, R now being the radius of a sub-sphere and Q its charge. This energy density is simply 3/40π times (Q/R²)² and as this latter
    quantity is the same throughout the Earth’s atmospheric layer we may expect all sub-spheres
    to have the same energy density.

    It is known that thunderballs all exhibit the same energy density, regardless of their
    size, as was reported by Altschuler et al (1970) and that this energy density lies in the range
2×10⁹ J/m³ to 5×10⁹ J/m³.

    We have, therefore, an encouraging link with the hypothetical model under
    consideration. However, more than this, we find that the energy density calculated from
    the above expression and using the value of the parameter Q/R2 derived for the Earth itself
is 2.37×10⁹ J/m³. The theory is therefore supported also by a quantitative connection with
    the geomagnetic field.
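For readers wishing to retrace the calculation, the following Python sketch works in Gaussian (electrostatic) units with the values quoted above; the factor 0.1 converts erg/cm³ to J/m³:

    import math

    M = 8.1e25       # geomagnetic moment, gauss-cm^3
    R = 6.4e8        # Earth radius, cm
    w = 7.26e-5      # Earth's angular speed, rad/s
    c = 3.0e10       # speed of light, cm/s

    Q = 5.0 * c * M / (w * R**2)               # from equation (1): M = QwR^2/5c
    strain = Q / R**2                          # surface electric strain, esu/cm^2
    u = (3.0 / (40.0 * math.pi)) * strain**2   # energy density, erg/cm^3
    print(u * 0.1)                             # ~2.37e9 J/m^3, inside the
                                               # 2-5 x 10^9 J/m^3 range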

    3. Discussion

    The greatest puzzle of all concerning thunderballs is their ability to pass through solid
    matter and still preserve their form. This is explained by the above theory. As a
    phenomenon of electric strain in the vacuum itself, a strain which is primary and sustained
    by some inner mechanism of the vacuum state, the thunderball can pass through solid matter
    just as easily as solid matter can pass through the vacuum. What is seen of the thunderball
    is merely the ionization in the atmosphere resulting from the decay of the energy locked up
    in this state of strain. As the pocket of strain passes through solid matter any ionization on
    the entry side merely subsides to be replaced by ionization on the exit side once the sub-sphere surface of the thunderball emerges.

    Another property of these objects is that they would exhibit a magnetic field of the
    same order as the Earth’s magnetic field. This is quite small but, bearing in mind that the
    mass of the thunderball is that of the field itself and therefore negligible, it needs very little
    force to displace them. Accordingly, it becomes possible to explain why thunderballs can
    hover over the surface of an aircraft wing in flight without being swept away in the
    slipstream (Aspden, 1980). In separating from the conductive surface of the wing, eddy
    currents would be induced by weakening the flux linkage sourced in the ball. These would
    develop a magnetic attraction for the ball and resist its separation, so holding the ball for
    a period in the proximity of the aircraft.

    Connected as they are with dramatic and dynamic events such as lightning discharges,
    it may appear to be bold speculation to suggest that these glowing spheres really are
    manifestations of a quasi-electrostatic effect. Yet, as we have seen, their unusual properties
    can be explained on such a theory. Given data concerning the amount of energy released,
    the theory suggests that the size of these objects is then determined by the standard energy
    density already estimated. This means that even a small amount of energy released by a
    discharge that is quite weak could produce a tiny thunderball. Since the electric field
    gradient is the same at the surface of all such objects and this is sufficient to ionize large and
    easily visible objects, we can expect even the smallest to exhibit ionization as well.

    They become, therefore, a potential hazard where explosive and inflammable
    substances are present. They constitute an unexpected hazard because they have a
    durability and a mobility not shared by other electrical phenomena.

    They are so elusive in character that they may exist without having been noticed except
    as an apparent illusion. Yet the thunderball is unquestionably a real phenomenon and a
    dangerous one.

    In order to devise experiments by which thunderballs may be created and examined
    under controlled laboratory conditions, one needs at least to begin with a viable hypothesis
    as to their character. Such a hypothesis has been offered in this paper. The theory presented
    should be judged in the light of the wide spectrum of theories proposed hitherto and
    discounted for various reasons; see, for example, the excellent review articles by Golde
    (1977) and Charman (1979). Of more practical concern on a grand scale are the efforts of
    Nobel laureate Kapitza (1979) who, recognizing that the energy densities of the thunderball
    are of the right order for application in fusion reactors, seeks to create them artificially
    by R.F. techniques, radio-frequency excitation being his assumed mechanism by which these
    objects derive their energy.

    Finally, it is noted that the author has explored in considerable depth the possible
    physical basis of the underlying ‘vacuum spin’ on which the argument was developed
    (Aspden, 1980). It remains to devise and conduct experiments aimed at inducing this spin
    condition by using radial electric fields, so as to verify and perhaps apply the phenomenon
    to useful ends.

    References

    Altschuler M D et al 1970 Nature 228 545

    Aspden H 1980 Physics Unified (Sabberton: PO Box 35, Southampton) Ch. 3 & p 188

    Aspden H 1981 The Journal of Meteorology UK 6 258

    Charman W N 1979 Physics Reports 54 261

    Golde R H 1977 Lightning (London: Academic Press) vol 1 p 409

    Graham G M and Lahoz D G 1980 Nature 285 154

    Jeans Sir James 1966 The Mathematical Theory of Electricity and Magnetism (Cambridge University Press) p 155

    Kapitza P L 1979 Rev. Mod. Phys. 51 417

    Rosser W G V 1968 Classical Electromagnetism via Relativity (London: Butterworths) Appendix 6 p 285

    LISTING OF PUBLISHED WORK OF DR HAROLD ASPDEN

    In writing this Report I had occasion to refer to just a few of the various
    published papers I have written over the years, and I am mindful that I have
    been writing in a confident style, taking strength from my other related
    efforts on the creative properties of the aether. The Correa research
    findings have been my inspiration, opening the door that gives access to
    that aether. The following list of my papers may serve as a partial index,
    giving guidance as to what also lies a little further behind that door.

    [1] ‘The Law of Electrodynamics’, Journal of the Franklin Institute, 287, 179-183
    (1969).

    [2] [Jointly with D. M. Eagles] ‘Aether Theory and the Fine Structure Constant’,
    Physics Letters, 41A, 423-424 (1972).

    [3] [Jointly with D. M. Eagles] ‘Calculation of the Proton Mass in a Lattice Model
    for the Aether’, Il Nuovo Cimento, 30A, 235-238 (1975).

    [4] ‘The Fresnel Formula applied to Empty Space’, International Journal of
    Theoretical Physics, 15, 263-264 (1976).

    [5] ‘Inertia of a Non-radiating Particle’, International Journal of Theoretical Physics,
    15, 631-633 (1976).

    [6] ‘A New Approach to the Problem of the Anomalous Magnetic Moment of the
    Electron’, International Journal of Theoretical Physics, 16, 401-404 (1977).

    [7] ‘Electrodynamic Anomalies in Arc Discharge Phenomena’, IEEE Transactions
    on Plasma Science, PS-5, 159-163 (1977).

    [8] ‘Energy Correlation Formula Applied to Psi Particles’, Speculations in Science
    and Technology, 1, 59-63 (1978).

    [9] ‘Crystal Symmetry and Ferromagnetism’, Speculations in Science and
    Technology, 1, 281-288 (1978).

    [10] ‘G Fluctuations and Planetary Orbits’, Catastrophist Geology, 3-2, 1-2
    (December 1978).

    [11] ‘Ion Accelerators and Energy Transfer Processes’, U.K. Patent Specification No.
    2,002,953A (Published 28 February 1979).

    [12] ‘The Spatial Energy Distribution for Coulomb Interaction’, Lettere al Nuovo
    Cimento, 25, 456-458 (1979).

    [13] ‘Energy Correlation of Radiative Decays of psi(3684)’, Lettere al Nuovo
    Cimento, 26, 257-260 (1979).

    [14] [Jointly with D. M. Eagles] ‘The Spatial Distribution of the Interaction
    Contribution to the Magnetic-Field Energy Associated with Two Moving
    Charges’, Acta Physica Polonica, A57, 473-482 (1980).

    [15] ‘The Inverse-Square Law of Force and its Spatial Energy Distribution’, J. Phys.
    A: Math. Gen. 13, 3649-3655 (1980).

    [16] ‘UFOs and the Cosmic Connection’, Energy Unlimited, 8, 37-40 (1980).

    [17] ‘A Theory of Neutron Lifetime’, Lettere al Nuovo Cimento, 31, 383-384 (1981).

    [18] ‘Atmospheric Electric Field Induction’, Speculations in Science and Technology,
    4, 314-316 (1981).

    [19] ‘The Anomalous Magnetic Moment of the Electron’, Lettere al Nuovo Cimento,
    32, 114-116 (1981).

    [20] ‘Electron Form and Anomalous Energy Radiation’, Lettere al Nuovo Cimento,
    33, 213-216 (1982).

    [21] ‘A Theory of Pion Lifetime’, Lettere al Nuovo Cimento, 33, 237-239 (1982).

    [22] ‘The Correlation of the Anomalous g-Factors of the Electron and Muon’,
    Lettere al Nuovo Cimento, 33, 481-484 (1982).

    [23] ‘Mirror Reflection Effects in Light Speed Anisotropy Tests’, Speculations in
    Science and Technology, 5, 421-431 (1982).

    [24] ‘Charge Induction by Thermal Radiation’, Journal of Electrostatics, 13, 71-80
    (1982).

    [25] ‘The Aether – an Assessment’, Wireless World, 88, 37-39 (October 1982).

    [26] ‘Relativity and Rotation’, Speculations in Science and Technology, 6, 199-202
    (1983).

    [27] ‘The Lamb Shift for a Cavity-Resonant Electron’, Lettere al Nuovo Cimento, 36,
    364-368 (1983).

    [28] ‘The Thunderball – an Electrostatic Phenomenon’, Institute of Physics
    Conference Series No. 66: Electrostatics 1983, pp. 179-184.

    [29] ‘The Determination of Absolute Gravitational Potential’, Lettere al Nuovo
    Cimento, 37, 169-172 (1983).

    [30] ‘The Nature of the Muon’, Lettere al Nuovo Cimento, 37, 210-214 (1983).

    [31] ‘Theoretical Resonances for Particle-Antiparticle Collisions based on the
    Thomson Electron Model’, Lettere al Nuovo Cimento, 37, 307-311 (1983).

    [32] ‘Meson Lifetime Dilation as a Test for Special Relativity’, Lettere al Nuovo
    Cimento, 38, 206-210 (1983).

    [33] ‘The Mass of the Muon’, Lettere al Nuovo Cimento, 38, 342-345 (1983).

    [34] ‘The Assessment of a Theory for the Proton-Electron Mass Ratio’, Lettere al
    Nuovo Cimento, 38, 423-426 (1983).

    [35] ‘The Finite Lifetime of the Electron’, Speculations in Science and Technology,
    7, 3-6 (1984).

    [36] ‘Electromagnetic Reaction Paradox’, Lettere al Nuovo Cimento, 39, 247-251
    (1984).

    [37] ‘The Muon g-Factor by Cavity Resonance Theory’, Lettere al Nuovo Cimento,
    39, 271-275 (1984).

    [38] ‘Boson Creation in a Sub-Quantum Lattice’, Lettere al Nuovo Cimento, 40, 53-57 (1984).

    [39] ‘The Steady-State Free-Electron Population of Free Space’, Lettere al Nuovo
    Cimento, 41, 252-256 (1984).

    [40] ‘Don’t Forget Thomson’, Physics Today, 15 (November 1984).

    [41] ‘The Nature of the Pion’, Speculations in Science and Technology, 8, 235-239
    (1985).

    [42] ‘The Maxwell-Fechner Hypothesis as an Alternative to Einstein’s Theory’,
    Speculations in Science and Technology, 8, 283-289 (1985).

    [43] ‘Unification of Gravitational and Electrodynamic Potential based on Classical
    Action-at-a-Distance Theory’, Lettere al Nuovo Cimento, 44, 689-693 (1985).

    [44] ‘The Paradox of Constant Planetary Mass as Evidence of a Leptonic Lattice-Structured Vacuum State’, Lettere al Nuovo Cimento, 44, 705-709 (1985).

    [45] ‘The Exploding Wire Phenomenon’, Physics Letters, 107A, 238-240 (1985).

    [46] ‘A New Perspective on the Law of Electrodynamics’, Physics Letters, 111A, 22-24 (1985).

    [47] ‘Theoretical Evaluation of the Fine Structure Constant’, Physics Letters, 110A,
    113-115 (1985).

    [48] ‘The Proton Enigma’, American Journal of Physics, 53, 938 (1985).

    [49] ‘More on Thomson’s Particles’, American Journal of Physics, 53, 616 (1985).

    [50] ‘Weak Violation – a New Concept in Relativity?’, Nature, 318, 317-318 (1985).

    [51] ‘Anomalous Electrodynamic Explosions in Liquids’, IEEE Transactions on
    Plasma Science, PS-14, 282-285 (1986).

    [52] ‘How to Test Special Relativity’, Nature, 321, 734 (1986).

    [53] ‘Classical Relativity’, Nature, 320, 10 (1986).

    [54] ‘Electron Self-Field Interaction and Internal Resonance’, Physics Letters, 119A,
    109-111 (1986).

    [55] ‘The Mystery of Mercury’s Perihelion’, The Toth-Maatian Review, 5, 2475-2481
    (1986).

    [56] ‘Flat Space Gravitation’, Physics Education, 21, 261-262 (1986).

    [57] ‘Fundamental Constants derived from Two-Dimensional Harmonic Oscillations
    in an Electrically Structured Vacuum’, Speculations in Science and Technology,
    9, 315-323 (1986).

    [58] ‘The Theoretical Nature of the Neutron and the Deuteron’, Hadronic Journal,
    9, 129-136 (1986).

    [59] ‘Meson Production based on Thomson Energy Correlation’, Hadronic Journal,
    9, 137-140 (1986).

    [60] ‘An Empirical Approach to Meson Energy Correlation’, Hadronic Journal, 9,
    153-157 (1986).

    [61] ‘The Exploding Wire Phenomenon as an Inductive Effect’, Physics Letters,
    120A, 80-82 (1987).

    [62] ‘Earthquake-related EM Disturbances’, Quarterly Journal of the Royal
    Astronomical Society, 28, 535-536 (1987).

    [63] ‘The Physics of the Missing Atoms: Technetium and Promethium’, Hadronic
    Journal, 10, 167-172 (1987).

    [64] ‘Synchronous Lattice Electrodynamics as an Alternative to Time Dilation’,
    Hadronic Journal, 10, 185-192 (1987).

    [65] ‘Instantaneous Electrodynamic Potential with Retarded Energy Transfer’,
    Hadronic Journal, 11, 307-313 (1988).

    [66] ‘The Theory of the Proton Constants’, Hadronic Journal, 11, 169-176 (1988).

    [67] ‘A Theory of Proton Creation’, Physics Essays, 1, 72-76 (1988).

    [68] ‘Do We Really Understand Magnetism?’, Magnets, 1, 19-24 (1988).

    [69] ‘The Vacuum as our Future Source of Energy’, Magnets, 3(8), 15-18 (1988).

    [70] ‘Conservative Hadron Interactions Exemplified by the Creation of the Kaon’,
    Hadronic Journal, 12, 101-108 (1989).

    [71] ‘The Theory of the Gravitation Constant’, Physics Essays, 2, 173-179 (1989).

    [72] ‘A Theory of Pion Creation’, Physics Essays, 2, 360-367 (1989).

    [73] ‘The Supergraviton and its Technological Connection’, Speculations in Science
    and Technology, 12, 179-186 (1989).

    [74] ‘Standing Wave Interferometry’, Physics Essays, 3, 39-45 (1990).

    [75] ‘The Harwen Energy Radiation Regenerator’, Speculations in Science and
    Technology, 13, 295-299 (1990).

    [76] ‘Maxwell’s Demon and the Second Law of Thermodynamics’, Nature, 347, 25
    (1990).

    [77] ‘The Theory of Antigravity’, Physics Essays, 4, 13-19 (1991).

    [78] ‘Magnets and Gravity’, Magnets, 6(6), 16-22 (1992).

    [79] ‘Electricity without Magnetism’, Electronics World, 540-542 (1992).

    [80] ‘Switched Reluctance Motor with Full A.C. Commutation’, U.S. Patent
    4,975,608 (4th December 1990).

    [81] ‘Thermal Power Device’, U.K. Patent Specification 2 239 490A (Published 3rd
    July 1991).

    [82] ‘The Law of Perpetual Motion’, Physics Education, 28, 202-203 (1993).

    [83] ‘The First Law of Thermodynamics’, Physics Education, 28, 340-342 (1993).

    [84] ‘Retardation in the Coulomb Potential’, Physics Essays, 8, 19-28 (1995).

    [85] ‘Space, Energy and Creation’, Sabberton Publications, P.O. Box 35,
    Southampton SO16 7RB, England.

    The latter item [85] was a privately published paper, distributed on the occasion of
    an invited lecture delivered by the author to the Physics Department of the
    University of Cardiff, Wales, in 1977.

    __________

    This report was first issued on 26th February 1996 by a private
    arrangement with Dr. Paulo Correa. It was updated and reissued for
    publication in its present form on 31st July 1996, the date of publication of the author’s ‘Aether Science Papers’.