It is evident that most events display a certain amount of predictability, and that the more closely we investigate, the better that predictability becomes. Simple systems, like those in physics experiments (the most predictable), more complex ones like chemical or biological systems, and finally psychological systems, all appear to follow well-defined laws. It is generally assumed that large, complex, relatively unpredictable systems like geological and meteorological ones are difficult to analyze mainly because of their complexity, and will become more predictable when we know more about them. This assumption is gradually being borne out. In short, the natural world is built upon order.
The fact is that the existence of science argues strongly for an ordering principle in the universe, which we may call God.1
|1I think this is the most natural definition of God. This definition would account for the use of the word God by many scientists (including Einstein). Notice that the definition does not presume a God Who deserves worship, a personal God, a God Who takes cognizance of us, or even a single God, although of course it includes such a God. The position of scientific materialism, or scientism, is that the ordering power or principle of the universe takes no notice of our thought processes except insofar as they influence the atoms and ions in our brains. There is still an ordering power of the universe. Thus scientism has a god. However, since that god is indistinguishable from the universe and takes no direct notice of us, for practical purposes scientism may be said to be atheistic.
The distinction between an ordering principle and an ordering power is easy to make in the case of an automobile or computer, but becomes more difficult to visualize at the atomic level, and probably ceases to exist at the subatomic level.
The success of science has been used to support arguments for two different theologies which deny the possibility of the supernatural. In theory they are exact opposites. To use theological terms, in one God is completely immanent, and in the other He is completely transcendent. That is, in one system (pantheism) the ordering principle of the universe is inherent in, and inseparable from, nature itself. Scientific materialism is a form of pantheism. In the other (deism) the ordering principle is totally separate from the universe, and having once started it going, has allowed it to run on its own without any outside interference.
In theory these two views are distinct, but I know of only one way to distinguish between the phenomena they predict, which we will come to shortly and which has little impact on our lives now. It does not really matter whether the ordering principle is part of nature or separate from nature, if in fact you cannot observe anything other than nature. Therein lies both the attraction of and the difficulty with these systems. The systems have a certain simplicity which is one element of elegance: All phenomena can be explained by natural law alone. It cannot be denied that a similar premise has done much to advance science. Pantheism is the simpler of the two, and because of this, has often been the preferred model for what may be called empiric naturalism, the belief that the universe is run by laws that do not take factors other than physical ones into account and do not change.2
|2There are a number of theologies or philosophies which have empiric naturalism as part of their foundation. There is, of course, scientific materialism, but there is also the idealism of Kant and that of Hegel, as well as Marxism and the skepticism of Hume and the pantheism of Spinoza. Some of these philosophies would vigorously protest being confused with scientific materialism. However, in terms of the predictions they make about phenomena, there is little or no difference between them, and therefore scientific theology is justified in considering them together.|
However, there are certain facts which cannot fit into empiric naturalism. These facts come from the core of disciplines like cosmology, atomic physics, and biochemistry. I will outline the facts below. They are surprisingly compelling. Therefore it is more reasonable to believe in a God Who changes the natural order of things on occasion than it is to believe in one who cannot or will not. Agreement with the facts takes precedence over any other property of a theory.
The Big Bang
The first subject we will discuss has to do with the only difference between pantheism and deism. Pantheism (often called atheism)3 holds that the ordering principle is inherent in, and inseparable from, nature itself, while deism holds that it is separate from nature.
|3I believe that technically this is a misnomer, for reasons outlined in note 1 above.|
For one who wants a simple picture of the universe, pantheism is the most attractive option. For some time it appeared to be the majority position of scientists, and indeed it may have been. But lately it has been all but abandoned by physicists and astronomers, largely because of the Big Bang theory. This startling development deserves some review.
Since the days of Galileo and Copernicus, scientists have become increasingly aware that the universe appears to be huge. First, the size of the solar system was accurately measured. Then the shift of nearby stars against more distant background stars between summer and winter (or spring and fall) was measured, and their distance from the solar system calculated on the basis of the angles they formed with the earth’s orbit around the sun. Then most of these stars were observed to form a pattern, called the main sequence, when their presumed actual brightness (calculated from their apparent brightness and their presumed distance from Earth) was plotted against their temperature as determined by their spectra. The Milky Way was noted to be made up of stars, presumably at great distances from us. Then other galaxies were noted to be made up of millions of stars as well. Some stars, called Cepheid variables, were found to pulsate at a regular rate that was proportional to their absolute brightness (this was first noted in the Magellanic Clouds, where all the stars are the same distance away for practical purposes). The absolute brightness of Cepheid variables was determined statistically using their radial motions (by Doppler measurements), their proper motions, and their apparent brightness. Finally, if Cepheid variables and main sequence stars were assumed to have the same absolute brightness in other galaxies, scientists could calculate how far away these galaxies were, and the boundaries of the visible universe were pushed back billions of light-years.
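The parallax geometry described above reduces to a simple reciprocal rule: a star's distance in parsecs is one over its parallax angle in arcseconds. As a rough numerical sketch (the parallax figure below is the approximate modern value for Alpha Centauri, used purely for illustration):

```python
def parallax_distance_parsecs(parallax_arcsec: float) -> float:
    """d (parsecs) = 1 / p (arcseconds).

    One parsec is the distance at which the earth-sun distance (1 AU)
    subtends an angle of one arcsecond against the background stars.
    """
    return 1.0 / parallax_arcsec

def parsecs_to_light_years(d_pc: float) -> float:
    return d_pc * 3.2616  # one parsec is about 3.26 light-years

# Alpha Centauri shows an annual parallax of roughly 0.75 arcseconds:
d = parallax_distance_parsecs(0.75)
print(f"{d:.2f} parsecs, or about {parsecs_to_light_years(d):.1f} light-years")
```

The smaller the parallax, the larger the distance, which is why the method fails for all but the nearest stars and the other rungs of the distance ladder described above become necessary.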
Scientists were interested in which way all these stars were moving. Lateral motion was hard to determine, but movement toward or away from us was relatively easy to determine. If a star was moving toward us, its light would have a higher frequency than expected. The blue would turn to violet or ultraviolet, the red would turn to orange or yellow, and the infrared would turn to red, depending on the speed of the star toward us. The reverse would happen if the star was moving away from us (it is the same principle that causes train whistles to seem higher in pitch when they are approaching and lower when they are receding from us). By itself this change could not be accurately observed, but if there were special dark (absorption) or bright (emission) lines in the spectrum, caused by different elements like hydrogen or calcium in the star’s gases, these lines would also be shifted and their shifts could be measured. If they shifted toward the violet, the star was approaching us. If they shifted toward the red, the star was receding. When Doppler measurements were made on galaxies, an astronomer named Slipher noted that a number of galaxies seemed to be receding from us at rates up to two million miles an hour. Edwin Hubble systematically catalogued galaxies according to their brightness, and therefore rough distance from us, and noted that they were nearly all receding from us, and that their light was shifted more toward the red the further away they were. He proposed Hubble’s law, which stated that the further away a galaxy is, the faster it is moving away from us and thus the more its light is shifted toward the red. In spite of initial resistance from the scientific community this law has now been accepted. The universe is now generally conceded to be expanding.
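The arithmetic behind these red-shift measurements and Hubble's law can be sketched briefly. For speeds well below that of light, the fractional shift of a spectral line gives the recession velocity directly, and Hubble's law then converts velocity into distance. The Hubble constant and the line wavelengths below are representative round figures chosen for illustration, not values taken from this chapter:

```python
C_KM_S = 299_792.458  # speed of light in km/s
H0 = 70.0             # assumed Hubble constant, km/s per megaparsec

def recession_velocity(rest_nm: float, observed_nm: float) -> float:
    """Non-relativistic Doppler: v = c * z, where z = (observed - rest) / rest."""
    z = (observed_nm - rest_nm) / rest_nm
    return C_KM_S * z

def hubble_distance_mpc(velocity_km_s: float) -> float:
    """Hubble's law v = H0 * d, solved for the distance d in megaparsecs."""
    return velocity_km_s / H0

# A calcium line with rest wavelength 393.4 nm observed shifted to 397.3 nm:
v = recession_velocity(393.4, 397.3)
print(f"receding at about {v:.0f} km/s, roughly {hubble_distance_mpc(v):.0f} Mpc away")
```

A shift of only one percent in wavelength already corresponds to a recession of nearly three thousand kilometers per second, which is why these tiny line displacements carry so much information.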
This led to two different explanations for the universe, the Big Bang theory and the Steady State theory. It turned out that galaxies were spread nearly equally in all directions, except where the Milky Way obscured them. We had already been shown not to be at the center of the solar system, or even of our galaxy, so neither model was willing to say that we are at the center of the universe. One model of the universe said that it should always look the same from everywhere and at all times (the “perfect cosmological principle”). This required that galaxies be created to fill in the gaps between other galaxies as they moved apart. It also implied that we should see burned-out galaxies which were much older than ours, just as our galaxy would eventually burn out and be viewed by possible observers in other galaxies. Finally, no matter how far you looked, all galaxies must look pretty much the same.
This model had one disconcerting feature. Matter must be continually created out of nothing in order to keep the model viable. There was nothing to keep an earth, for example, from being created out of nothing. This was very uncomfortable for scientists who didn’t want an outside God meddling with the universe. They solved this by proclaiming that newly created matter had to be put into the universe as hydrogen, and evenly spaced (it does not take much hydrogen–only one thousand atoms per cubic kilometer, or 4200 per cubic mile, per year–far too small to measure).
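The unit conversion in that parenthesis is easy to check: a cubic mile contains about 4.17 cubic kilometers, so one thousand atoms per cubic kilometer per year works out to roughly the 4200 per cubic mile quoted above.

```python
KM_PER_MILE = 1.609344  # international mile in kilometers

atoms_per_km3_per_year = 1000
km3_per_mile3 = KM_PER_MILE ** 3  # about 4.17 cubic kilometers per cubic mile
atoms_per_mile3_per_year = atoms_per_km3_per_year * km3_per_mile3

print(round(atoms_per_mile3_per_year))  # roughly the 4200 per cubic mile quoted
```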
But the steady state theory has largely been abandoned, and for three reasons. First, and weakest, the galaxies furthest away from us (and therefore furthest back in time) seem to have slightly more of a red shift than you would expect from their apparent distance. That is, the galaxies furthest away from us seemed to be receding faster. This was contrary to the perfect cosmological principle and therefore to the steady state theory, but in accord with the Big Bang theory, which predicted a faster expansion earlier (before gravity had a chance to slow it down). Second, there was the discovery of radio galaxies and quasars. These objects have no counterparts closer to us. Although they were not predicted by the Big Bang theory, they are compatible with it, but not with the perfect cosmological principle. Finally, the Big Bang theory predicted the existence of radiation from the original Big Bang, red-shifted to the microwave range, which was independently discovered by two radio astronomers named Penzias and Wilson. There was no way to account for this background radiation using the steady state model.4
|4A good non-technical survey of the relevant information and concepts can be found in Asimov I: The Universe: From Flat Earth to Quasar. New York: Walker and Company, 1966.|
|5The concept of an initial uniform expansion has recently come under fire, and some may misinterpret this to mean that the Big Bang is in trouble. In one sense it is, as the Big Bang cannot easily account at present for a “lumpy” universe, but in another sense it is not, as projecting the positions of galaxies backward in time still leads to a point.|
So at this time the Big Bang theory reigns supreme.5 But this means that the universe had a beginning, and there is no way to get back before that time. Matter is not eternal. At the maximum it was created about 19 billion years ago, and we cannot make even an educated guess about what happened before. Furthermore, the laws of physics break down at that point, for the natural prediction we would make is that if the universe did start as a point, at the instant of its creation it should have turned into what is known in astronomy as a black hole, with too much gravitational pull for even light to escape.
Perhaps the best attempt to explain how the universe could have been created without violating the law of gravity (or more accurately general relativity) has been made by Stephen Hawking.6
|6Outlined in A Brief History of Time. New York: Bantam Books, 1988.|
Many scientists have been very uncomfortable with this conclusion, and seem to have reacted from their hearts rather than from their heads, as documented by Robert Jastrow (himself an agnostic) in God and the Astronomers.7
|7New York: W. W. Norton and Company, Inc., 1978.|
So the second thing that we can learn from nature is that it was created, in all probability by the God who gives it order. Pantheism will not do. We have at least to be deists if we are to follow the evidence.
But it turns out we cannot even be pure deists. We must accept the fact that God did not merely set the universe in motion and leave it to run on its own. He is actively coordinating what goes on now. For quantum theory (and experimental evidence) is incompatible with a universe where particles merely react with one another. 
The usual picture of the universe given by believers in scientism was of one that is objective, and thus the same to all observers; completely regular, and thus predictable if part of it is known; and in which information can travel only by particles, and by perturbations of those particles, which are limited to the speed of light. The last condition is a requirement if no effects exist that are caused by something outside of nature. This picture is incompatible with quantum theory. And some of the experiments to test whether quantum theory or this kind of universe explains nature better have been done, and the results have confirmed quantum theory.8
|8For those wanting to look at most of the original papers, they are conveniently collected in Wheeler JA, Zurek WH, eds: Quantum Theory and Measurement. Princeton: Princeton University Press, 1983. A more popular treatment may be found in Herbert N: Quantum Reality. Garden City, NY: Anchor Press, 1985, or Casti JL: Paradigms Lost: Images of Man in the Mirror of Science. New York: William Morrow and Co., 1989.|
|9Related to f by the equation k=f/c, where c is the speed of light.|
But that does not mean quantum theory has not proved strange. For example, take a black and white television screen. This is made up of a source of electrons (a hot negatively charged wire), some hollow metal cylinders which are charged so as to accelerate the electrons, some deflectors to aim the beam, and a screen which lights up each time it is struck by an electron. Now let us ignore the deflectors and instead aim our beam at a plate with a hole in it. At first the beam goes right through the hole without noticeable change and makes a bright spot on the screen. Then if we make the hole smaller the bright spot grows smaller, as stray electrons are blocked. But as the hole is made still smaller, something strange happens. The bright spot starts to enlarge in a peculiar way. It makes a pattern of concentric rings, called an Airy pattern (named after George Airy, who first described it in 1835).
But now let us enlarge the hole again until we have our Airy pattern, then turn the beam intensity down. If we look at the screen we can see individual flashes of light, more in the center and less at the edges, and practically none in the dark rings. Here we can see the individual electrons as they arrive. But they act like they are guided by waves. Perhaps, we think, the waves are like sound waves, dependent on large numbers of particles. So we turn down the intensity until we see a flash here, then another one there, seemingly scattered randomly across the screen. We adjust the beam so that only about one flash a minute gets through. That should take care of any effect that takes large numbers of particles. Then we put some photographic film right next to the screen, and go away and leave the apparatus for a week. When we come back we develop the film and guess what! We have an Airy pattern.
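The heart of this result, that single, widely spaced arrivals still build up the wave pattern, can be mimicked with a toy simulation. The intensity profile below is a simple cos² interference pattern used as a stand-in for the true Airy pattern (which involves Bessel functions); the point being illustrated is only that each hit is an individual random event, yet the accumulated hits trace out the bright and dark bands:

```python
import math
import random

random.seed(1)  # fixed seed so the run is repeatable

def intensity(x: float) -> float:
    """Stand-in interference profile on screen coordinate x in [0, 1).

    Bright bands near x = 0, 1/3, 2/3; dark bands near x = 1/6, 1/2, 5/6.
    """
    return math.cos(3 * math.pi * x) ** 2

def single_hit() -> float:
    """One electron arrival, drawn by rejection sampling from the profile."""
    while True:
        x = random.random()
        if random.random() < intensity(x):
            return x

# Accumulate one-at-a-time hits into 10 bins across the screen.
bins = [0] * 10
for _ in range(20_000):
    bins[int(single_hit() * 10)] += 1

print(bins)  # the bright and dark bands emerge from individual random hits
```

No hit "knows" about any other, yet the histogram reproduces the wave pattern, which is exactly the puzzle the photographic film presents.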
This experiment illustrates several facets of quantum theory. First, very small objects which are indisputably particles have properties usually attributed to waves.10
|10The converse is also true. We could have performed a similar experiment with light. In fact, the Airy pattern was originally used to prove that light was made up of waves. Yet light behaves like particles when it is sent and received.|
A second set of experiments might be called the ghost pathway experiment. We can set up a half-silvered mirror, A, at a 45° angle to a beam of light. This splits the beam into two roughly equal beams, one going to mirror B and one to mirror C. These beams can then be reflected toward mirror D, which is also half-silvered, and they will interfere with each other. In this particular case they will both join to strike counter 1 (a slight adjustment of the mirrors will cause them to strike counter 2 instead).
Now if we put an object in either of the pathways, we will get counts in both counters 1 and 2. If the object in the pathway is a counter (say counter 3 in the pathway from mirror B to mirror D), we will get two counts in counter 3 and one count in counter 2 for every count in counter 1, on the average. The same thing happens if we put counter 4 in the C-D pathway. In this part of the experiment the photon behaves as if it were a little particle (or wave packet) which takes one or the other pathway. But if we keep counters 3 and 4 out of the way all the photons again go to counter 1, as if they knew somehow that both paths were open to them. If we think of a photon as a wave packet, how does the photon know that the path it didn’t take (which may be 3 feet or 3 miles away) was open? Or if we think of the photon as a wave, why is it not counted roughly simultaneously in, say, counters 4 and 1 or 2?
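The bookkeeping behind these counts can be reproduced with ordinary complex arithmetic. In the standard idealized model (a sketch, not a description of any particular apparatus), a half-silvered mirror transmits an amplitude unchanged and reflects it with a 90° phase factor i; the phases added by the full mirrors B and C are the same for both paths and so are omitted. Adding the two path amplitudes at mirror D shows why, with both paths open, every photon reaches counter 1, and why blocking one path restores the 2:1:1 statistics:

```python
T = 1 / 2 ** 0.5   # transmission amplitude at a half-silvered mirror
R = 1j / 2 ** 0.5  # reflection amplitude: same magnitude, 90-degree phase shift

# Both paths open: add the amplitudes for the two routes to each counter.
amp_counter1 = T * R + R * T  # (transmit at A, reflect at D) + (reflect at A, transmit at D)
amp_counter2 = T * T + R * R  # (transmit, transmit) + (reflect, reflect): these cancel

print("P(counter 1) =", round(abs(amp_counter1) ** 2, 6))  # -> 1.0: every photon
print("P(counter 2) =", round(abs(amp_counter2) ** 2, 6))  # -> 0.0: none

# Counter 3 blocking the B path leaves only the C-path term for each counter:
amp1_blocked = T * R
amp2_blocked = T * T
# Each remaining counter gets probability 1/4; the missing 1/2 was absorbed
# by counter 3, giving the 2:1:1 statistics described above.
```

The arithmetic is easy; the mystery is why a single photon's bookkeeping should involve both paths at once when only one of them can ever register a count.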
Some of you are tempted to shake your heads and say, “This must be science fiction.” I am sorry, but the above experiments  are sober fact. Others may ask, “How can it be this way?” The answer is, nobody knows. There are mathematical models which successfully predict the results of this experiment. But no one that I know of has been able to postulate a model that we can visualize and that still turns out to explain the data.
But we are only warming up. Suppose we take the mirror D out of our apparatus (and leave counters 3 and 4 out). Then half of the photons will go to counter 1 and half will go to counter 2. If our apparatus is sufficiently large (or we are able to move the mirror fast enough), we can actually decide after the photon is in our apparatus whether it will act as if it traveled both pathways (and always is counted in counter 1), or whether it will act as if it traveled only one pathway (and strikes counters 1 and 2 with equal frequency but not simultaneously). That is, in a manner of speaking, our actions now will determine what happened in the past.11
|11See, for example, Alley CO, Jakubowicz O, Steggerda CA, Wickes WC. “A Delayed Random Choice Quantum Mechanics Experiment With Light Quanta.” In Kamafuchi S et al. (eds), Proceedings of the International Symposium: Foundations of Quantum Mechanics in the Light of New Technology, Tokyo, 1983. Tokyo: Physical Society of Japan, 1984, pp. 158-64; Alley CO, Jakubowicz O, Wickes WC. “Results of the Delayed Random Choice Quantum Mechanics Experiment With Light Quanta and Proposal of a New Type of EPR Experiment Using Light Quanta Produced by a Nonlinear Optical Process.” In Namiki M et al. (eds), Proceedings of the 2nd International Symposium: Foundations of Quantum Mechanics in the Light of New Technology. Tokyo: Physical Society of Japan, 1987, pp. 36-52; and Schleich W, Walther H. “Single-Atom and Single-Photon Experiments.” In Namiki M et al., ibid., pp. 25-35.|
Most of us grew up in an environment where science was thought to have nearly reduced everything to particles moving in gravitational and electromagnetic fields: Science could reasonably be expected to complete that reduction (except for miracles for those who believed in them) in the near future. But if one listens to quantum physicists one hears weird statements like “There is no deep reality” or “The observational act determines reality” or “The moon isn’t really there unless you look at it” or “Consciousness determines reality”. At first we are tempted to dismiss such statements as those of some crazy people or embittered anti-scientific philosophers. But these are Nobel prize winners in science. And the reason why they are saying such things is that they are trying to make sense of the most scientific discipline of them all. 
There are other strange aspects of quantum theory. Some photons behave as if they came from the entire surface of a star which may be more than 100 million miles across (this effect is used in stellar interferometry; it cannot simply be two photons interfering, or the phase relationship would be inconstant and therefore the interference pattern would be lost), and the photons may be hundreds of feet wide or even more. And some photons should interfere with themselves even though the two pathways they traveled mean that the two parts of the interfering photon are separated by millions of light-years.
The next set of experiments we will review was suggested by Einstein and two of his colleagues, Boris Podolsky and Nathan Rosen, in 1935. Einstein was profoundly unhappy with Niels Bohr’s interpretation of quantum mechanics, which came to be thought of as the orthodox interpretation. It stated that objects like electrons and photons did not really have attributes like position, momentum, or energy, but only probable ranges. Quantum theory was interpreted as a description of probability waves, and quantum objects had no more reality than that. Thus, where an electron struck a screen was determined by chance, almost like rolling dice. (Actually the dice are more deterministic–if we knew their speed, rotation, distance above the table, and the hardness and resiliency of the table more accurately we could probably predict the value the dice would give more accurately. We cannot predict the position a given electron will show on a screen any more accurately than the probability wave, no matter how hard we try.)
As Einstein stated, “I cannot believe that God plays dice with the universe.” He spent a great deal of time trying to construct idealized experiments (or “thought experiments”) which would prove that conjugate variables such as position and momentum, or energy and time, could be measured simultaneously to an arbitrary accuracy (or at least less than h for their product). He failed every time. Finally he proposed that there had to be “hidden variables” which perhaps could not be measured, but which would give the “real” position, momentum, etc. of a quantum object. He and his two colleagues stated that if you knew something would be true about a system before you measured it, then it must be real. They noted that if two quantum objects (protons, electrons, photons, etc.) were brought in very close proximity and then allowed to separate, a property called spin axis would always be found to be pointed in opposite directions on the two particles when measured. Thus they felt that these objects had to have real spin axes that were simply unknown at the beginning of the experiment.
If we set up the experimental apparatus shown in the diagram, according to quantum theory, each pair of photons given off by a mercury atom will have opposite spins, 180 degrees apart. That means they will appear to have the same polarization to our analyzer (The analyzer cannot tell the difference between “up” and “down” photons. Both register as “vertical”). So if we run a tape of our counters we might see a recording as follows:
The conventional interpretation of quantum theory said that an unpolarized photon “decides” at the polarizing device, in a random manner, which way it will be polarized. Einstein said no: The photon is actually already polarized, and the polarizing device merely reveals the direction of its pre-existing polarization. He said that otherwise we could not explain why our two photons were polarized in parallel, because the measurements are done independently, and signals from one measurement, travelling at the speed of light, can’t influence the other measurement.
The Einstein-Podolsky-Rosen (or EPR) experiment was the last gasp of what is now called naïve realism. The arguments from the experiment have been completely nullified, and in fact reversed. But before we see how that happened, we should first note the relationship between naïve realism and the idea of causality.
The idea of causality is instinctive to humans. A footprint is found; something must have caused it. A ripple is seen on a lake; something must have disturbed the water. A man gets sick; something must have made him sick. A woman has a baby; something must have caused her to have a baby.
The first two examples are rather straightforward. With experience and careful observation we can determine the causes of the footprint and the ripple, with a high degree of agreement between experts, and of experts with previous observers of the events. Biology and chemistry, on the other hand, do not lend themselves to analysis so easily. The role of sexual intercourse in causing childbirth is not obvious unless someone points it out. The results come much later, the cause is not usually seen in public, and the results do not always follow the cause. As a result, some societies have not discovered (or have lost) the idea of the connection between the two. In these societies children are thought of as having some other cause. For disease, the picture is even more complex. There are multiple real causes of disease, many of them either microbes which can’t be seen, or emotions which can’t be found physically and which may be explicitly denied. Thus the causes the ancients gave for disease were many, ranging all the way from bad air (in Latin mal aria) to demons to punishment from God. Some things did make sense: syphilis was caused by sexual intercourse with someone who had it, plague was caused by proximity to someone who had it, poison could kill, and living around swamps was unhealthy. As swamps were drained and the number of new malaria cases went down, we could see the emergence of a primitive science and technology.
Science entered the picture in a big way with the work of Isaac Newton. He postulated his famous laws of motion: 1. A body tends to continue both its speed and direction of motion (or remain at rest) unless acted on by an extrinsic force. 2. Any change in speed or direction of motion is proportional to the force applied and inversely proportional to the mass of the object. 3. For every force exerted by one object on another, there is an opposite but otherwise equal force exerted by the second object on the first. Then he postulated the law of gravity: Each mass exerts an attractive force on every other mass proportional to the product of the masses and inversely proportional to the square of the distance between them. Suddenly, the motion of objects could be accounted for using only the direct pushes and pulls they gave to each other, and this mysterious force called gravity, about which Newton said, “I frame no hypotheses.”
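Newton's law of gravity as just stated is easy to put into numbers. The gravitational constant G was not known to Newton himself; the modern value is used below, and the earth-moon figures are round textbook numbers chosen for illustration:

```python
G = 6.674e-11  # gravitational constant in N m^2 / kg^2 (measured long after Newton)

def gravitational_force(m1_kg: float, m2_kg: float, r_m: float) -> float:
    """F = G * m1 * m2 / r^2: the attraction between two point masses."""
    return G * m1_kg * m2_kg / r_m ** 2

# Earth (about 5.97e24 kg) and moon (about 7.35e22 kg), about 3.84e8 m apart:
force = gravitational_force(5.97e24, 7.35e22, 3.84e8)
print(f"earth-moon attraction: {force:.2e} N")
```

The inverse-square form means that doubling the distance quarters the force, which is what allowed Newton to tie the fall of an apple and the orbit of the moon to a single law.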
The kinetic theory of matter further extended this picture to liquids and gases. Chemistry was soon absorbed into this system. Biochemistry followed suit, and more and more biologists assumed that biology would also soon be explained, along with behavioral science. Electricity, then magnetism required the introduction of fields, but these were fairly easily incorporated into the mechanical view of the universe, and in fact pointed the way to a solution to the mysterious gravitational field, in the form of Einstein’s theory of general relativity. At this point the triumph of the mechanical view of the universe seemed complete, with only mopping-up exercises left to do.
Implicit in this view is the idea of cause and effect. A force is applied, so an object (be it an electron, an atom, an arm, a car, or the moon) changes its motion. Implicit also is the idea of time. An object cannot change its motion before the force is applied. Cause always comes before effect in time.
Thus when Einstein presented his theory of special relativity, it caused great consternation, partly because the theory held that for what used to be considered simultaneous events widely separated in space, the words “before” and “after” were meaningless in an absolute sense. Which event was first depended on  which way you were travelling, and how fast. This would destroy the possibility of instantaneous action at a distance, on which the theory of gravitation rested. Physicists found this idea very distasteful, but were forced to swallow it by the logic of the theory of relativity and its close fit with observations.
In the place of the usual concept of simultaneity as the absolute boundary between cause and effect, special relativity introduced the concept of the “absolute elsewhere”. Briefly, the speed of light in a vacuum is the absolute speed limit of the universe. If light from one event has time to reach another, then the first event is in the “absolute past” of the second, and the second event is in the “absolute future” of the first, and the first event can (partly or completely) cause the second event, but not vice versa. All observers will agree on which event is first. However, if the events are close enough in time and far enough apart so that light from either event cannot reach the other before it occurs, then the events are in each other’s “absolute elsewhere”. Neither event can cause the other. Therefore, even though observers travelling at different velocities may disagree on which event happened first, no one would argue that something which happened second caused something which happened first.
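The classification just described can be written out directly: two events are compared by asking whether light could cross the spatial separation in the time between them. A minimal sketch, in units chosen so that the speed of light is 1:

```python
def classify(dt: float, dx: float, c: float = 1.0) -> str:
    """Where does event 2 stand relative to event 1?

    dt: time from event 1 to event 2 (taken positive here).
    dx: spatial separation between the two events.
    """
    if c * dt > dx:
        return "absolute future"     # light can arrive in time: causation is possible
    if c * dt < dx:
        return "absolute elsewhere"  # no signal can connect the events
    return "light cone"              # exactly on the boundary

print(classify(dt=2.0, dx=1.0))  # -> absolute future
print(classify(dt=1.0, dx=2.0))  # -> absolute elsewhere
```

Observers in different states of motion will disagree about dt for events in the absolute elsewhere, but all of them will agree on which of the three categories a pair of events falls into.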
It might also be observed that since nothing except light can travel at the speed of light, time must always travel forwards for any ordinary object. This includes people. Thus there are limits to the warping of time with special relativity, and for ordinary purposes time can be considered an independent variable just as Newton considered it. Note that Einstein’s interpretation of the EPR experiment is heavily dependent on this idea of the absolute elsewhere. If the two photons (or protons or whatever) are measured, then the only ways they can match each other are if one’s measurement caused the other’s measurement, or if they both had a real property which was merely revealed by the device. If they are measured in each other’s absolute elsewhere and they still match, then there must be some kind of real property that they share.
The downfall of naïve realism started when Niels Bohr showed that the orthodox view of quantum theory could account for the EPR experiment, although the traditional idea of causality had to be sacrificed. It was helped along by David Bohm, who finally developed a theory in which each particle (electron, proton, photon, etc.) had a definite position and momentum, but was influenced by the measuring device in such a way that the particle exactly imitated quantum theory. This theory had long been a goal of Einstein, but had been thought to be impossible. The problem with the theory was that it required the particles to sense changes in the measuring devices at speeds greater than that of light, thus solving quantum theory “in the way Einstein would have liked least.”
Finally John Bell proposed a mathematical model which made certain requirements for naïve realism, which conflicted with the predictions of quantum theory. These have been tested, and quantum theory has been vindicated. The concept is simple enough, and important enough, that we will again use an example.
Suppose we take our EPR apparatus and turn one of the analyzers on its axis, so that instead of lying in the plane of the paper the “perpendicular” photons now lie in a plane at some angle to the paper. If the angle is 90°, the functions of the counters are simply reversed, and we have another version of the original EPR apparatus. But if the angle is less, we have a situation where some of the photons will match one way and some will match the other. And we can say something about the way they match.
If Einstein is correct, and each photon has a “real” spin, then if we turn one analyzer through a small enough angle, we might see a record in which, say, 25% of the records do not match, so that the record might look like the sample shown [in the original book].
Quantum theory, on the other hand, has a straightforward prediction for what the match for a given angle should be. In this situation it is cos²θ, where θ is the angle one analyzer makes with the other (regardless of which analyzer moved, or whether both of them did). Thus at an angle of 30° there should be a 25% disagreement between the two analyzers, and at 60° there should be a 75% disagreement (the worst violation of Bell’s Theorem is actually at 67° and 23°, but the disagreement exists for every angle except 45°). This clearly violates the conditions required by any theory of localized reality. Quantum theory is incompatible with any local real theory of the universe.
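The quantum prediction can be tabulated directly (a minimal sketch; the function name is my own):

```python
import math

def quantum_match_fraction(angle_degrees):
    """Quantum prediction: the fraction of photon pairs whose analyzer
    results agree, when the analyzers differ by the given angle."""
    theta = math.radians(angle_degrees)
    return math.cos(theta) ** 2

# Disagreement is simply 1 minus the match fraction.
for angle in (0, 30, 45, 60, 90):
    disagreement = 1 - quantum_match_fraction(angle)
    print(f"{angle:>2} degrees: {disagreement:.0%} disagreement")
```

At 30° this gives the 25% disagreement quoted above, and at 60° the 75% disagreement; at 0° the analyzers always agree, and at 90° they always disagree.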
As you might expect, several scientists have tried this experiment with various refinements. The results are compatible with quantum theory, and extremely unlikely to be compatible with the requirements of local real theories. They even switched the angles of the analyzers while the photons were in flight, and the statistics still agreed with quantum theory and disagreed with classical theory. Strictly speaking, the above experiments are statistical, and there is perhaps a one-in-a-billion chance that they could have come out that way by the luck of the draw, but no physicist that I know of is betting on it. And there is another thought experiment I know of where local reality and quantum theory are diametrically opposed.12
|12 Greenberger DM, Horne M, Zeilinger A: “Going beyond Bell’s Theorem.” In Bell’s Theorem, Quantum Theory and Conceptions of the Universe. Kafatos M, ed. Kluwer Academic Publications, Dordrecht, The Netherlands 1989, p. 69. See also Mermin ND. “What’s Wrong With These Elements of Reality?” Physics Today 43(6):9-11, 1990.|
What do physicists make of all this? They basically fall into two camps. On one side are those who follow David Bohm’s lead, and believe that there is a reality to every property of an electron or proton or photon, but that influences travel faster than light, which means that the traditional idea of causality must be abandoned as an absolute, just as the traditional idea of time and space (and mass) as absolutes was abandoned with Einstein’s theories of special and general relativity.
Others go much farther. Some will say that a quantum doesn’t have a real position or momentum, but only ranges which can be predicted by its quantum waveform. Thus they are forced to say that as a quantum wave approaches a position-sensing device such as a phosphor screen, as it is sometimes put, “a miracle happens.” The waveform collapses. This is caused partly by the quantum and partly by the measuring device. Thus a quantum’s position is not real until it is measured. But what is so special about measurement that can create reality of one kind and not another?
Niels Bohr would say that there is no deep reality. All we have is measurements. Perhaps that is true. But then why do the measurements correlate? Perhaps nature itself is not real. But something behind it, that sustains correlations, is. There must be some deep reality. 
There are those who say that consciousness creates reality. But then why should different consciousnesses create the same reality so much of the time? Does this imply an ultimate Consciousness that correlates the other consciousnesses? There are previously agnostic scientists who think that it does.
Some would say that the measurement act creates reality or selects from different kinds of potential reality. But why should it create one kind of reality and not another? In an effort to get away from this problem it has even been proposed that both realities are created, that every time a quantum measurement is made, two universes are created simultaneously. But what makes quantum measurement so special that it can create universes when other processes can’t? And how many universes does it take? (Remember that there are millions of positions that each electron may strike on our tube, and millions of electrons that strike the tube every second that you watch. The numbers rapidly get out of hand.) And that still doesn’t answer the question, why are we in the universe we are in? For even if there are multiple universes we only experience one of them, and that fact alone demands explanation.
The opposite tack has been taken by some who say that the universe is an undivided whole. This view has trouble explaining the apparent divisions. But perhaps the organizing power behind the universe is an undivided whole. Some have even suggested abandoning the basic logical principle that a statement cannot be both true and false in the same sense at the same time (the Aristotelian law of non-contradiction) as it applies to quantum objects. But this law would seem to be necessary if one is to say anything intelligible. No satisfactory substitute for this law has been proposed. And David Bohm’s theory demonstrates that it is not necessary to abandon normal logic when dealing with quantum objects.
Even with the most conservative interpretation of quantum theory, there are correlations between events where neither event can cause the other, and yet the particles do not carry the correlation with them. Since there are correlations, something does the correlating. It is not communication between the particles. It is not an inherent property of the particles themselves. It is therefore not bound by our normal laws of causality. We are brought face to face with the organizing power of the universe in a direct  way. God is acting in the here and now.13
|13Strictly speaking, we can only say that a god is acting in the here and now. But it is simplest to assume the identity of this God with the One Who organizes the universe and Who started it. Ockham’s Razor puts the burden of proof on those who would differentiate the two.|
Some will say that this is only according to quantum theory. No, the above is true even if quantum theory is superseded. The above experiments are valid even if our explanation for them has to be changed. It still is true that photons and protons have spin correlations at great distances, and that these correlations are not entirely carried by the particles themselves or communicated by other particles travelling at or below the speed of light. The introduction of particles travelling faster than light will not help, because then the question becomes, Which way does the communicating particle travel? To at least some observers, the communication is instantaneous. Nature cannot do this and maintain causality. Only a God outside of nature can make the experiments come out right.
So the third thing that we can learn from nature is that God is intimately involved in running it on a day-to-day basis.14
|14According to this analysis of quantum theory God is involved in much more than just these experiments. All quantum phenomena involve influences travelling faster than light, including light, so God is involved in literally all that we see. He is also involved in all molecular bonding. That doesn’t leave much out of His domain.|
The Origin of Life
There is a final fallback position of scientism. Perhaps God got the universe started, and perhaps He is intimately involved with its day-to-day operation. But quantum theory predicts that the tiny variations we have noted above cannot be used by us to convey messages faster than light, and so there are still no macroscopic breaks in the cause-effect chain. There are no large-scale miracles. God will still not step in and make a direct change in the large-scale operation of the universe. For practical purposes, the universe still runs on automatic.
Unfortunately, this position has no explanation for the origin of life. Everyone, from literal creationists to believers in scientism,  believes that life did not originate at the time the universe was created. Perhaps it was two days later, perhaps eighteen billion years later, but it was later. So from the point of view of the believer in scientism, life must have arisen by natural processes. But there are no natural processes that can account for its origin.
As an undergraduate student in chemistry, I had learned about the Miller-Urey experiment, and the theory that life had evolved in the sea as amino acids combined to form proteins, and nucleic acids combined to form DNA and RNA, then supramolecular assemblies, and finally cells. I was also aware of the theological implications of the theory. So I decided to research the subject and give my senior chemistry seminar on it. I was expecting to find some room for doubt regarding the theory. But I was totally unprepared for the one-sidedness of the evidence I found.
Chemical evolution requires four steps.
1. The basic organic building blocks have to be made from readily available compounds such as water, atmospheric carbon dioxide, nitrogen, sulfides and/or sulfates, and possibly methane and ammonia.
2. The building blocks must be joined into long chains with some kind of order.
3. The chains must be assembled into supramolecular assemblies.
4. These assemblies must start working together to perform the functions necessary for life.
According to theory all these steps had to be accomplished by random processes, as nature obviously did not manufacture raw materials, transfer them to test tubes, allow them to react, and purify the products according to a predetermined plan before proceeding to the next step.
For those who are a bit rusty on their biochemistry, a short review of the biochemical basis of life follows:
1. Simple building blocks. For proteins (the most studied molecules), the basic building blocks are amino acids, or more properly alpha-amino acids. These have the general structure shown [in the original book] (with the exception of proline, which is cyclic).
They all are capable of coming in two forms (the L- and D- forms), which relate to each other in much the same way as our left hand relates to our right hand. The amino acids found in the proteins in living organisms are all “left-handed” (the L- form). Amino acids made without special asymmetric catalysts such as enzymes (which have not been found in nature except in association with living or previously living organisms) are always a mixture of half left-handed and half right-handed forms.
In addition, fatty acids are necessary for life. They are used in forming membranes. They have a long hydrocarbon chain attached to a carboxylic acid group. An example is palmitic acid, CH3(CH2)14COOH.
2. Large molecules. Lipids are relatively simple compounds composed of fatty acids attached to glycerol, choline, and other compounds.
Polysaccharides are compounds made up of many simple sugars and modified sugars joined together. These have not been thought to be terribly important in chemical evolution and have not been studied much from that aspect.
Proteins are made up of amino acids, joined in a specific sequence. Proteinoids are not found in living matter, but can be found in certain experiments which we will discuss below. They bear the same relationship to proteins that the letters IE MDIG KSI SWH EICWOSDKCI WSSSCTRGDIDME EL KFIXCK DS DPW WODJEIDJSJDLFWE IU bear to the sentence TRANSFER THE TERMINAL PHOSPHORYL GROUP FROM ADENOSINE TRIPHOSPHATE TO THE NUMBER SIX CARBON OF GLUCOSE.
Nucleic acids are formed by combining purine and pyrimidine bases with ribose or deoxyribose, and joining the resultant products together with a phosphate between each ribose or deoxyribose. Remember that the bases can fit together in only one way. This means that two strings of bases attached to ribose and phosphate (ribonucleic acid or RNA) or 2-deoxyribose and phosphate (deoxyribonucleic acid or DNA) are reverse matches to each other. If they are separated and the bases of each strand are again reverse matched, two new strands of DNA or RNA will be made, identical to the old ones and to each other; or RNA can be made from DNA and vice versa. The result is a coding system that can specify the manufacture of particular proteins and replicate itself with help from certain enzymes. This preservation, copying, and decoding of information is essential to life.
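The copying principle can be illustrated with a toy sketch (it ignores the antiparallel orientation of real strands and the enzymes involved, and the names are my own):

```python
# Watson-Crick pairing: adenine with thymine, guanine with cytosine.
DNA_COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement_strand(strand: str) -> str:
    """Return the base-by-base complementary strand."""
    return "".join(DNA_COMPLEMENT[base] for base in strand)

original = "ATGGCA"
template = complement_strand(original)   # the "reverse match"
copy = complement_strand(template)       # matching the template again

# Two rounds of matching regenerate the original sequence exactly,
# which is why the code can be preserved and copied.
assert copy == original
```

Because each base fits only one partner, the complement of the complement is always the original string, and that is the heart of the coding system.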
3. Supramolecular assemblies. When the requisite molecules are mixed, sometimes a supramolecular assembly will spontaneously form. An example is ribosomes, which have been separated into their RNA and protein components. These are then purified and mixed to produce functional ribosomes. Sometimes the assembly is defective, as when this process is applied to membranes. A membrane will form spontaneously across a hole in a barrier when the parts are mixed, but it will not have an inside and an outside as the membrane in a cell has. Sometimes all our efforts to date are incapable of producing the end product, as in the case of mitochondria. Not much experimentation has been done in this area except on viruses, so no definite conclusions can be drawn at this point.
4. Life. Step four, from macromolecular assemblies to life, has never been demonstrated. The only fact we know is that the reverse process, death, occurs to all organisms. In fact, the greatest advance in biology was the discovery that spontaneous generation did not occur (some may argue that biological evolution was the greatest advance in biology, but even it depends on the foundation that life comes only from other life). 
Experimental evidence. Many experiments have been done trying to form the basic building blocks of life. In my seminar I reviewed all the experiments I could find.15
|15A later and more complete review including more than 60 such experiments was done by R Evard and D Schrodetzki (“Chemical Evolution”. Origins 3(1):9-37,1976). Their bibliography is still the best I have seen on the simple experiments. For a popular summary of the evidence from the point of view of scientific orthodoxy, see Orgel LE: “The Origin of Life on the Earth.” Scientific American 1994;271(4):76-83. Other summaries can be found in biochemistry textbooks such as Zubay G., ed.: Biochemistry. 2nd ed. New York: Macmillan Publishing Company, 1988; or Lehninger AL, Nelson DL, Cox MM: Principles of Biochemistry. 2nd ed. New York: Worth Publishers, 1993. A good critique at a popular level from a skeptical point of view is Shapiro R: Origins: A Skeptic’s Guide to the Creation of Life on Earth. Toronto: Bantam Books, 1987. A good critique from a theist (but not special creationist) point of view can be found in Thaxton CB, Bradley WL, Olsen RL: The Mystery of Life’s Origin: Reassessing Current Theories. New York: Philosophical Library, 1984.|
|16There is one statement in Thaxton et al., note 15, p. 27, without a reference. One point to remember is that simply having trace amounts of some compound is not enough to assume it had a major role in the next step in chemical evolution. The questions of quantity and stability must also be considered.|
|17I have read of one “random” experiment where the synthesis of guanine was reported (Ponnamperuma C: “Abiological Synthesis of Some Nucleic Acid Constituents.” In Fox SW, ed.: The Origins of Prebiological Systems and their Molecular Matrices. New York: Academic Press, 1965, pp. 221-36). The adenine and guanine were identified by two-dimensional chromatography, which does not lead to as secure results as gas chromatography/mass spectrometry (see Thaxton et al., note 15, p. 29 note), or perhaps high pressure liquid chromatography or bioassay. The only experiments which have produced pyrimidines were really chemical syntheses, except for an unpublished report (see Thaxton et al., note 15, p. 24).|
|18Chemical Evolution. Oxford: At the Clarendon Press, 1969, Pp. 125-6. Since I did my original research I have found a few other similar experiments which produced ribose. Two experiments actually were reported to produce deoxyribose, although one was essentially a chemical synthesis (Oró J: “Stages and Mechanisms of Prebiological Organic Synthesis.” In Fox, see note 17, pp. 137-62). The other was not considered definitive by the person reporting it (Ponnamperuma in note 17, esp. pp. 223-7). The identification was again by two-dimensional chromatography.|
First, not all of the amino acids used in proteins have been made. I have not seen an experiment which produced tryptophan.16 Further, all the experiments produced equal numbers of left-handed and right-handed amino acids. And most, if not all (they were not always looked for), produced amino acids not found in proteins. Fatty acids beyond propionic acid have not been formed. In this environment sugars cannot be produced without immediately being destroyed. Sugars attached to phosphate are even more unstable. And I have only read of two bases, adenine and possibly guanine, being produced in measurable amounts.17 Worse yet, the major product of the experiments was hydrogen cyanide. Attempts to get around this difficulty consist of explanations of how hydrogen cyanide is necessary to form amino acids. That may be true, but I fail to see how cytochrome c would have evolved with no oxygen and abundant hydrogen cyanide.
This curious fact seems to be systematically ignored in all the popular presentations I have read and even in most of the technical presentations. One can only wonder why.
The only successful experiment to produce sugars that I could find was done in 1955 by Melvin Calvin, who produced urea, formaldehyde (that’s right, formaldehyde), and formic acid, and took the formaldehyde and added it to water standing on limestone. He got sugars up to 6 carbons, but they decomposed in the presence of limestone unless trapped in more complex material. Of course, there was a lot of leftover formaldehyde, which would wreak havoc on any life that might possibly evolve.18
So from experiments imitating natural processes we have gotten (with the help of traps) all the sugars, most amino acids, one or possibly two purine and no pyrimidine bases, and no fatty acids, along with several incorrect compounds and at least two major toxins. We can’t even get all the major building blocks needed to produce life. I fail to see how we can be expected to score a run when we can’t even get to first base.
Our putting the building blocks together has been no more successful. Fox and colleagues have taken a mixture of amino acids with glutamic and aspartic acids in excess (they had to do this to allow the mixture to melt without decomposing) and produced long chains resembling proteins, except that they were random. These proteinoids did not contain cysteine or tryptophan.
We still are a long way from proteins. Suppose that somehow, out of the soup which the primitive sea was supposed to be, you were able to obtain the 20 protein-forming amino acids in their L-form without any contamination by other amino acids, and heat them. If they all combined equally well, the probability of forming a protein of 250 residues (an average-sized protein) by random combinations is one in 10^326. It is much less than the probability of filling the entire universe with sand, with one red grain in it, and grabbing at random the red grain of sand. You could go on for a billion years checking sand grains and never stand a chance of finding the sand grain you were after.
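That figure can be checked in a few lines (a sketch under the stated assumption that all 20 amino acids combine equally well at every position):

```python
import math

residues = 250   # an average-sized protein
choices = 20     # the 20 protein-forming amino acids

# Each position has 20 equally likely choices, so one specific
# sequence has probability 1 in 20^250. Work in base-10 logarithms
# to keep the number manageable.
exponent = residues * math.log10(choices)   # about 325.3
print(f"about 1 chance in 10^{math.ceil(exponent)}")
# prints: about 1 chance in 10^326
```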
But supposing you did make a protein. Which would be the most likely protein to make? Obviously one without tryptophan, since tryptophan is neither made nor incorporated into proteinoids by random methods. What kind of proteins do not have tryptophan? Silk fibroin, collagen, maybe elastin (all structural proteins with no catalytic activity), and ribonuclease, which instead of making RNA tears it down.
Consider RNA. There have been several attempts to synthesize RNA by mixing the four bases attached to ribose (remember only one, adenine, has been made by random methods) and polymerizing them by chemical agents or heat. All the experiments I have seen have produced 2′-5′ linkages instead of the natural 3′-5′ linkages. In other words, even given the raw materials, RNA cannot be produced by random methods. This is not simply a very low probability. This is zero probability.
Then consider DNA. DNA can only make 3′-5′ linkages, because there is no hydroxyl group on carbon 2 of 2-deoxyribose. But DNA has to have 2-deoxyribose as well as the bases and phosphates needed for RNA, and 2-deoxyribose has not been made by random methods. As we have seen before, only adenine among the four bases has been made by random methods. So, according to all the evidence available, DNA cannot be produced by random methods.
But supposing you could create DNA of exactly the right length from deoxyribonucleotides. And supposing that only 2 out of every 3 residues needed to be in order. The smallest free-living bacterium I know has a DNA molecule that contains 1.1 million bases. The chances of the right DNA forming are one in 10^441,510, that is, 1 divided by 1000000000 . . . (441,493 more zeroes) . . . 00000000 (with this type, it would take about 197 pages [in the original book] to write all the zeroes).
That number is so small it has no meaning to most of us. It is comparable to the chances of dumping a 5-ton truckload of pennies (1,466,666 of them) on the street and having them all turn up heads. Would you believe someone if he said that he tried it and it happened? If anyone would, I know where he can get some waterfront property in Florida very cheaply. Another comparison: it has the same chance of occurring as a measurement 45,091 standard deviations off, which is the same as the chances of finding a man over 11,278 feet tall (the standard deviation for a man’s height is 3 inches).
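Both figures can be verified with elementary logarithms (a sketch; the 2-out-of-3 requirement is the assumption stated above):

```python
import math

# DNA: 1.1 million bases, 4 possible bases each, with only 2 of
# every 3 positions needing to be in a particular order.
positions_fixed = round(1_100_000 * 2 / 3)      # 733,333 positions
dna_exponent = positions_fixed * math.log10(4)  # 4^733,333 in powers of 10

# Pennies: 1,466,666 coins all landing heads is 1 chance in 2^1,466,666.
pennies = 1_466_666
penny_exponent = pennies * math.log10(2)

print(round(dna_exponent))    # 441510 -- 1 chance in 10^441,510
print(round(penny_exponent))  # 441510 -- the same odds, as claimed
```

The two come out identical because 1,466,666 is exactly twice 733,333 and log 4 is exactly twice log 2, so the penny comparison is not an approximation but an exact restatement of the DNA odds.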
“But,” somebody says, “there are a million different bacteria that could have been formed, and there were a billion years when, all around the world, DNA was being formed.” Well, the number of bacteria that can live in an atmosphere containing large quantities of hydrogen cyanide is small. But supposing that there were a billion different kinds of bacteria that could be formed. That would improve the chances to 1 in 10^441,501. If the DNA was formed every second for over 3 billion years, the chances would be 1 in 10^441,493. And if it was being made in every square inch of the earth’s surface at that rate, the chances would be raised to 1 in 10^441,475. By these assumptions we have raised the chances by a factor of 10^35, but we still have so far to go before it becomes remotely possible that it is not worth considering seriously. The estimates in the foregoing calculations were extremely liberal, and from the evidence we have, the conditions we assumed about the abundance of nucleotides did not exist.
So we can’t form all the amino acids, any of the membrane-forming fatty acids, more than one purine base, the sugar needed for DNA, usable proteins, or RNA in its normal form, and the chances of producing DNA are worse than astronomically small. It would seem that the chances of producing life are less than those of producing the Oxford English Dictionary by an explosion in a print factory. They are more like those of producing said dictionary by an explosion in a Russian print factory.
But it gets worse. Even getting life once we have the macromolecules is impossible. Every can of chicken soup is a testimony to this. Chicken soup has all the DNA, RNA, and protein we could wish, including properly formed enzymes, with plenty of lipids to make membranes and sugars for energy and structural needs. It is even in almost the right order. And yet Campbell’s makes lots of money because bacteria don’t reconstitute once they are disorganized beyond a certain point. Primordial soup has to have been much thinner than chicken soup and not nearly as nutritious. It would take a miracle to get life out of primordial soup.
And that is precisely what we are talking about. The origin of life was a miracle. We have not settled when the Creator did it. We certainly have not settled how. But that the Creator “stepped into” our universe after it was formed, and performed another act of creation that involves us directly, there is no reasonable doubt.19
|19Again it could theoretically be a different God, but again Ockham’s Razor dictates that we identify this God with the previous One without evidence to the contrary.
Strictly speaking, the Creator was never “out of” our universe, as we have seen from quantum theory. He simply acted in a way that is not consistent with all known physical laws, specifically defying the second law of thermodynamics. It may very well be that in superseding physical law He was guided by a higher law, perhaps one that takes into account the thought processes of intelligent beings.|
Therefore, the position that science can explain everything is disproved by the results of science itself. It is a pervasive position in our time, and we will have to constantly be alert for it, but we need not bow in obeisance to it. We will have to keep this in mind as we work on the rest of our theology.
To summarize, from nature we can learn that Whoever God is, He runs His universe in a very orderly way the vast majority of the time. Miracles should be relatively rare, and one might expect them also to exhibit orderliness. We can also learn that He started the universe, that He is intimately involved in its day-to-day operation, and that when He chooses He can step in and act creatively, and that life seems to have a special place in His heart.20
|20It is true that we have not “mathematically” proved the above assertions (although after Lakatos one might be suspicious of that kind of “proof” as well), but there is no credible evidence for any other theory given our present knowledge. The current scientific orthodoxy is, on this issue, struggling against the scientific evidence on the origin of life. Atheism and pure Deism are maintaining a belief in spontaneous generation in defiance of the known facts, similar to the manner of the Flat Earth Society.
Some may try to denigrate the above arguments by saying that they are a sophisticated form of the “God of the gaps” argument, which has supposedly been discredited by science. This objection ignores the fact that these particular gaps were discovered by science, and are growing more firm with time rather than less so. In prescientific times the universe was believed to be static, causality was believed to be universal by many, and spontaneous generation was an accepted “fact”. It is science that has destroyed this picture of the universe.
There may also be those who are uncomfortable basing theological positions on scientific arguments instead of the Bible. These may note that the approach recorded in the book of Job (38-41) as used by God to convince Job of the error of his theology is precisely the same kind of approach. “Where were you when I laid the foundation of the earth? . . . Where is the way to the dwelling of light . . . ? . . . Who has let the wild ass go free?” If God can use a “God of the gaps” argument, it would seem that we are justified in doing the same.
In fact, I am not sure that there is any other kind of argument for God, or anything else for that matter. Any entity is inferred by its effect on our senses, or by its effect on something that affects our senses.