Footnotes
https://manyworlds784.blogspot.com/p/footnotes.html
Noumena I: Spacetime and its discontents
Newton with his belief in absolute space and time considers motion a proof of the creation of the world out of God's arbitrary will, for otherwise it would be inexplicable why matter moves in this [relative to a fixed background frame of reference] rather than any other direction. -- Hermann Weyl (60).
Weyl, a mathematician with a strong comprehension of physics, had quite a lot to say about spacetime. For example, he argued that Mach's principle, as adopted by Einstein, was inconsistent with general relativity.
Background on Weyl
http://plato.stanford.edu/entries/weyl/#LawMotMacPriWeyCosPos
Weyl's book 'Symmetry' online
https://archive.org/details/Symmetry_482
See also my paper,
Einstein, Sommerfeld and the Twin Paradox
http://paulpages.blogspot.com/2013/10/einstein-sommerfeld-and-twin-paradox.html
Einstein had hoped to deal only with "observable facts," in accord with Mach's empiricist (and logical positivist) program, and hence to reduce spacetime motions to relative actions among bodies, but Weyl found that such a level of reduction left logical holes in general relativity. One cannot, I suggest, escape the background frame, even if it is not a strictly Newtonian background frame. Sometimes this frame is construed to be a four-dimensional spacetime block.
So how would one describe the "activity" of a four-dimensional spacetime block? Something must be going on, we think, yet, from our perspective looking "out," that something "transcends" space and time.
Popper, in his defense of phenomenal realism, objected that the spacetime block interpretation of relativity theory implies that time and motion are somehow frozen, or not quite real. While not directly challenging relativity theory, he objected to such a manifold and ruled it out as not in accord with reality as he thought reality ought to be. But, we hasten to make note of Popper's trenchant criticism of the logical positivism of most scientists.
My thought: "Laws" of nature, such as Newton's law of universal gravitation, are often thought of in a causative sense, as in "the apple accelerates toward the ground at 9.81 meters per second squared because of the law of gravity."
Actually, the law describes a range of phenomena which are found to be predictable via mathematical formulas. We have a set of observable relations "governed" by the equations. If something has mass or momentum, we predict that it will follow a calculated trajectory. But, as Newton himself knew, he had an algorithm for representing actions in nature; he had not got at the world beneath superficial appearances. How does gravity exercise action at a distance? If you say, via curved spacetime fields, one may ask, how does spacetime "know" to curve?
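To make the distinction concrete, here is a minimal sketch of how the law predicts without explaining; the values for G and the Earth's mass and radius are ordinary reference figures, not anything derived in this essay.

```python
# A minimal sketch: Newton's formula predicts the apple's behavior
# while saying nothing about what gravity "is."
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2 (reference value)
M_EARTH = 5.972e24   # kg (reference value)
R_EARTH = 6.371e6    # m (reference value)

def surface_acceleration(mass, radius):
    """g = G*M/r^2: a rule that predicts, not an explanation."""
    return G * mass / radius**2

def fall_distance(g, t):
    """Predicted distance fallen from rest after t seconds: d = g*t^2/2."""
    return 0.5 * g * t * t

g = surface_acceleration(M_EARTH, R_EARTH)   # ~9.81 m/s^2
print(f"predicted g = {g:.2f} m/s^2")
print(f"predicted fall in 2 s = {fall_distance(g, 2.0):.2f} m")
```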
We may say that gravity is a principal cause of the effect of a rock falling. But, in truth, no one knows what gravity is. "Gravity" is a word used to represent behavior of certain phenomena, and that behavior is predictable and calculable, though such predictability remains open to Hume's criticism.
On this line, it should be noted that Einstein at first resisted basing what became his General Theory of Relativity on geometrical (or, topological) representations of space and time. He thought that physical insight should accompany his field equations, but eventually he settled on spacetime curvature as insight enough. His competition with David Hilbert may well have spurred him to drop that proviso. Of course, we all know of his inability to accept the lack of "realism" implied by quantum mechanics, which got the mathematics right but dispensed with certain givens of phenomenal realism. On this point, we note that he once said that he favored the idea that general relativity's mathematics gave correct answers without accepting the notion that space and time were "really" curved.
Newton had the same problem: There was, to him, an unsatisfactory physical intuition for action at a distance. Some argue that this difficulty has been resolved through the use of "fields," which act as media for wave motion. The electromagnetic field is invoked as a replacement for the ether that Einstein ejected from physics as a useless concept. Still, Einstein saw that the field model was intuitively unsatisfactory.
As demonstrated by the "philosophical" issues raised by quantum theory, the problem is the quantization of energies needed to account for chains of causation. When the energy reaches the quantum level, there are "gaps" in the chain. Hence the issue of causation can't easily be dismissed as a problem of "metaphysics" but is in truth a very important area of discussion on what constitutes "good physics."
One can easily visualize pushing an object, but it is impossible to visualize pulling an object. In everyday experience, when one "pulls," one is in fact pushing. Yet, at the particle level, push and pull are complementary properties associated with charge sign. This fact is now sufficiently familiar as not to seem occult or noumenal. Action at a distance doesn't seem so mysterious, especially if we invoke fields, which are easy enough to describe mathematically, but does anyone really know what a field is? The idea that gravitation is a macro-effect from the actions of gravitons may one day enhance our understanding of nature. But that doesn't mean we really know what's going on at the graviton level.
Gott (61), for example, is representative of numerous physicists who see time as implying many strange possibilities. And Goedel had already argued in the 1940s that time cannot exist at all, implying it is some sort of illusion. Goedel had found a solution to Einstein's field equations of general relativity for a rotating universe in which closed time loops exist, meaning a rocket might travel far enough to find itself in its past. Einstein shrugged off this finding of his good friend, arguing that it does not represent physical reality. But Goedel countered that if such a solution exists at all, then time cannot be what we take it to be and doesn't actually exist (62).
These days, physicists are quite willing to entertain multiple dimension theories of cosmology, as in the many dimensions of string theory and M theory.
We have Penrose's cyclic theory of the cosmos (63), which differs from previous Big Bang-Big Crunch cyclic models. Another idea comes from Paul J. Steinhardt, who proposes an "ekpyrotic universe" model. He writes that his model is based on the idea that our hot big bang universe was formed from the collision of two three-dimensional worlds moving along a hidden, extra dimension. "The two three-dimensional worlds collide and 'stick,' the kinetic energy in the collision is converted to quarks, electrons, photons, etc., that are confined to move along three dimensions. The resulting temperature is finite, so the hot big bang phase begins without a singularity."
Steinhardt on the ekpyrotic universe
http://wwwphy.princeton.edu/~steinh/npr/
The real point here is that spacetime, whatever it is, is rather strange stuff. If space and time "in the extremes" hold strange properties, should we not be cautious about assigning probabilities based on absolute Newtonian space and equably flowing time? It is not necessarily a safe assumption that what is important "in the extremes" has no relevance locally.
And yet, here we are, experiencing "time," or something. The difficulty of coming to grips with the meaning of time suggests that beyond the phenomenal world of appearances is a noumenal world that operates along the lines of Bohm's implicate order, or -- in his metaphor -- of a "holographic universe."
But time is even more mind-twisting in the arena of quantum phenomena (as discussed in Noumena II, below).
The "anthropic cosmological principle" has been a continuing vexation for cosmologists (64). Why is it that the universe seems to be so acutely fine-tuned to permit and encourage human life? One answer is that perhaps we are in a multiverse, or collection of noninteracting or weakly interacting cosmoses. The apparent miniscule probability that the laws and constants are so well suited for the appearance of humans might be answered by increasing the number and variety of cosmoses and hence increasing the distribution of probabilities for cosmic constants.
The apparent improbability of life is not the only reason physicists have for multiverse conjectures. But our concern here is that physicists have used probabilistic reasoning on a question of the existence of life. This sort of reasoning is strongly reminiscent of Pascal's wager and I would argue that the question is too great for the method of probability analysis. The propensity information is far too iffy, if not altogether zero. Yet, that doesn't mean the problem is without merit. To me, it shows that probability logic cannot be applied universally and that it is perforce incomplete. It is not only technically incomplete in Goedel's sense, it is incomplete because it fundamentally rests on the unknowable.
Paul Davies, in the Guardian, wrote: "The multiverse comes with a lot of baggage, such as an overarching space and time to host all those bangs, a universe-generating mechanism to trigger them, physical fields to populate the universes with material stuff, and a selection of forces to make things happen. Cosmologists embrace these features by envisaging sweeping 'meta-laws' that pervade the multiverse and spawn specific bylaws on a universe-by-universe basis. The meta-laws themselves remain unexplained -- eternal, immutable transcendent entities that just happen to exist and must simply be accepted as given. In that respect the meta-laws have a similar status to an unexplained transcendent god." Davies concludes, "Although cosmology has advanced enormously since the time of Laplace, the situation remains the same: there is no compelling need for a supernatural being or prime mover to start the universe off. But when it comes to the laws that explain the big bang, we are in murkier waters."
Davies on the multiverse
http://www.theguardian.com/commentisfree/belief/2010/sep/04/stephen-hawking-big-bang-gap
Noumena II: Quantum weirdness
The double-slit experiment
The weird results of quantum experiments have been known since the 1920s and are what led Werner Heisenberg to his breakthrough mathematical systemization of quantum mechanics.
An example of quantum weirdness is the double-slit experiment, which can be performed with various elementary particles. Consider the case of photons, in which the intensity of the beam is reduced to the point that only one photon at a time is fired at the screen with the slits. In the case where only one slit is open, the photo-plate detector on the other side of the screen will record basically one spot where the photons that make it through the slit arrive in what one takes to be a straight line from source to detector.
However, when two slits are open the photons are detected at different places on the plate. The positions are not fully predictable, and so are random within constraints. After a sufficient number of detections, the trained observer notices a pattern. The spots form a diffraction pattern of the kind one would expect if a wave passing through the two slits separated into two subwaves that interact, showing regions of constructive and destructive wave interference. However, the components of the "waves" are the isolated detection events. From this effect, Max Born described the action of the particles in terms of probability amplitudes, or that is, waves of probability.
This is weird because there seems to be no way, in terms of classical causality, for the individual detection events to signal to the incoming photons where they ought to land. It also hints that the concept of time isn't what we typically take it to be. That is, one might interpret this result to mean that once the pattern is noticed, one cannot ascribe separate time units to each photon (which seems to be what Popper, influenced by Landé, was advocating). Rather, it might be argued that after the fact the experiment must be construed as an irreducible whole. This bizarre result occurs whether the particles are fired off at the velocity of light or well below it.
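A toy sketch may help make the point about probability amplitudes vivid. It assumes made-up slit parameters and ignores the single-slit envelope; it simply draws individual "detections" from a Born-rule density proportional to |psi1 + psi2|^2 and shows the fringes emerging only in the accumulated record.

```python
# Toy illustration, not a physical simulation: single detections drawn from a
# Born-rule density build up two-slit fringes. Parameters are illustrative.
import numpy as np

wavelength = 500e-9      # m (illustrative)
slit_sep = 50e-6         # m (illustrative)
screen_dist = 1.0        # m (illustrative)

x = np.linspace(-0.02, 0.02, 2001)            # positions on the photo-plate
phase = np.pi * slit_sep * x / (wavelength * screen_dist)
density = np.cos(phase) ** 2                   # |psi1 + psi2|^2, up to a constant
density /= density.sum()                       # normalize as a discrete distribution

rng = np.random.default_rng(0)
hits = rng.choice(x, size=5000, p=density)     # one "photon" detection at a time

counts, edges = np.histogram(hits, bins=40)
for c, left in zip(counts, edges[:-1]):
    print(f"{left*1e3:7.2f} mm | " + "#" * (c // 10))   # crude picture of the fringes
```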
Schroedinger's cat
In 1935, Erwin Schroedinger proposed what has come to be known as the Schroedinger's cat thought experiment in an attempt to refute the idea that a quantum property in many experimental cases cannot be predicted precisely, but can only be known probabilistically prior to measurement. Exactly what does one mean by measurement? In the last analysis, isn't a measurement an activity of the observer's brain?
To underscore how ludicrous he thought the probability amplitude idea was, Schroedinger gave this scenario: Suppose we place a cat in a box which contains a poison gas pellet rigged to a Geiger counter that measures radioactive decay. The radioactive substance has some suitable half-life, meaning there is some probability that a decay is detected within the specified interval and some probability that it is not.
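For concreteness, here is a minimal sketch of the probability weights in play, assuming an illustrative half-life of one hour (the essay specifies no numbers); it merely computes the chance that a decay has been detected by the time the box is opened.

```python
# Minimal sketch of the two branch weights in the cat scenario,
# using an illustrative half-life (no figures are given in the essay).
import math

def decay_probability(t, half_life):
    """Probability that at least one decay has been detected by time t."""
    lam = math.log(2) / half_life           # decay constant from the half-life
    return 1.0 - math.exp(-lam * t)

half_life = 60.0                             # minutes (illustrative)
t = 60.0                                     # open the box after one half-life
p_detected = decay_probability(t, half_life) # weight of the "detected" branch
p_not = 1.0 - p_detected                     # weight of the "not detected" branch
print(f"P(detected by t) = {p_detected:.2f}, P(not detected) = {p_not:.2f}")  # 0.50 / 0.50
```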
Now in the standard view of quantum theory, there is no routine causation that can be accessed that gives an exact time the detection will occur. So then, the property (in this case, the time of the measurement) does not exist prior to detection but exists in some sort of limbo in which the quantum possibilities -- detection at time interval x and non-detection at time interval x -- are conceived of as a wave of probabilities, with the potential outcomes superposed on each other.
So then, demanded Schroedinger, does not this logically require that the cat is neither alive nor dead prior to the elapse of the specified time interval?! Of course, once we open the box, the "wave function collapses" and the cat's condition -- dead or alive -- tells us whether the quantum event has been detected. The cat's condition is just as much of a detection event as a photo-plate showing a bright spot.
Does this not then mean that history must be observer-centric? Despite many attempts, no one has been able to find a way out of this dilemma (see Toward). Einstein conceded that such a model was consistent, but rejected it on philosophical grounds. You don't really suppose the moon is not there when you aren't looking, he said.
The EPR scenario
In fact, also in 1935, Einstein and two coauthors unveiled another attack on quantum weirdness known as the Einstein-Podolsky-Rosen (EPR) thought experiment, in which the authors pointed out that quantum theory implies what Einstein called "spooky action at a distance" that violates the limit set by C, the velocity of light in a vacuum, which is an anchor of his theory of relativity. Later John Bell found a way to apply a test to see whether statistical correlation would uphold the spooky quantum theory. Experiments by Alain Aspect in the 1980s and by others have confirmed, to the satisfaction of most experts, that quantum "teleportation" occurs.
So we may regard a particle as carrying a potential for some property or state that is only revealed upon detection. That is, the experiment "collapses the wave function" in accordance with the property of interest. Curiously, it is possible to "entangle" two particles of the same type at some source. The quantum equations require that each particle carries the complement property of the other particle -- even though one cannot in a proper experiment predict which property will be detected first.
Bohm's version of EPR is easy to follow: An electron has a property called "spin." Just as a screw may rotate left or right, so an electron's spin is given as "up" or "down," which is where it will be detected in a Stern-Gerlach device. There are only two possibilities, because the electron's rotational motion is quantized into halves -- as if the rotation jumps immediately to its mirror position without any transition, just as the Bohr electron has specific discontinuous "shells" around a nucleus.
Concerning electron spin
http://hyperphysics.phy-astr.gsu.edu/hbase/spin.html
So if we entangle electrons at a source and send them in different directions, then, quantum theory declares that if we detect spin "up" at detector A, it is necessarily so that detector B ought to read spin "down."
In that case, as Einstein and his coauthors pointed out, doesn't that mean that the detection at A required a signal to reach B faster than the velocity of light?
For decades, EPR remained a thought experiment only. A difficulty was that detectors and their related measuring equipment tend to be slightly cranky, giving false positives and false negatives. Error correction methods might have reduced the problem, but it wasn't until Bell introduced his statistical inequalities that the possibility arose of conducting actual tests of correlation.
In the early 1980s Aspect arranged photon experiments that tested for Bell's inequalities and made the sensational discovery that the correlations showed Einstein to be wrong: detection of one property strongly implied that the "co-particle" would be detected with the complement property. (We should tip our hats both to Schroedinger and Einstein for the acuity of their thought experiments.) Further, Aspect did experiments in which monitors were arranged so that any signal from one particle to another would have had to exceed the velocity of light. Even so, the results held.
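The statistical core of such tests can be sketched briefly. The snippet below computes the quantum-mechanical prediction for the CHSH combination of correlations at the standard optimal analyzer angles; any local hidden-variable account is bounded by 2, and Aspect-type experiments came out on the quantum side. This is an illustration of Bell-type reasoning, not a description of Aspect's actual apparatus.

```python
# Quantum prediction for the CHSH combination versus the local bound of 2.
import math

def E(a, b):
    """Quantum-mechanical correlation for entangled spin-1/2 (or polarization) analyzers."""
    return -math.cos(a - b)

a, a2 = 0.0, math.pi / 2          # Alice's two analyzer settings (standard optimal choice)
b, b2 = math.pi / 4, 3 * math.pi / 4   # Bob's two analyzer settings

S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))
print(f"quantum CHSH value S = {S:.3f}")   # 2*sqrt(2) ~ 2.828
print("local hidden-variable bound = 2")
```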
This property of entanglement is being introduced into computer security regimens: if, say, the NSA or another party is looking at the data stream, some entangled particles can be used to tip off the sender that the stream is being observed.
Hidden variables
John Von Neumann, contradicting Einstein, published a proof that quantum theory was complete in Heisenberg's sense and that "hidden variables" could not be used to devise causal machinery to explain quantum weirdness. Intuitively, one can apprehend this by noting that if one thinks of causes as undetected force vectors, then Planck's constant means that there is a minimum on the amount of force (defined in terms of energy) that can exist with respect to detection or observation. If we think of causes in terms of rows of dominos fanning out and at points interacting, we see there is nothing smaller than the "Planck domino." So there are bound to be gaps in what we think of as "real world causation."
Popper objected to Von Neumann's claim on grounds that after it was made, discoveries occurred in the physics of the nucleus that required "new" variables. Yet if hidden variables are taken to mean the forces of quantum chromodynamics and the other field theories, these have no direct influence on the behaviors of quantum mechanics (now known as quantum field theory). Also, these other theories are likewise subject to quantum weirdness, so if we play this game, we end up with a level where the "variables" run out.
We should note that by "hidden variable," Von Neumann evidently had in mind the materialist viewpoint of scientists like Bohm whose materialism led him to reject the minimalist ideas of the "Copenhagen interpretation" whereby what one could not in principle observe simply doesn't count. Instead, Bohm sought what might be called a pseudo-materialist reality in which hidden variables are operative if one concedes the bi-locality inherent in entanglement. In fact, I tend to agree with Bohm's view of some hidden order, as summarized by his "holographic universe" metaphor. On the other hand, I do not agree that he succeeded in his ambition to draw a sharp boundary between the "real external world" and subjective perception.
Bohm quotes John Archibald Wheeler:
"No phenomenon is a phenomenon until it is an observed phenomenon" so that "the universe does not exist 'out there' independently of all acts of observation. It is in some strange sense a participatory universe. The present choice of mode of observation... should influence what we see about the past... the past is undefined and undefinable without the observation" (65).
"We can agree with Wheeler that no phenomenon is a phenomenon until it is observed, as by definition, a phenomenon is what appears. Therefore it evidently cannot be a phenomenon unless it is the content of an observation," Bohm says, adding, "The key point in an ontological interpretation such as ours is to ask the question as to whether there is an underlying reality that exists independently of observation" (66).
Bohm argues that a "many minds" interpretation of quantum effects removes "many of the difficulties with the interpretation of [Hugh] Everett and [Bryce] DeWitt (67), but requires making a theory of mind basically to account for the phenomena of physics. At present we have no foundations for such a theory..." He goes on to find fault with this idea.
And yet, Bohm sees that "ultimately our overall world view is neither absolutely deterministic nor absolutely indeterministic," adding: "Rather it implies that these two extremes are abstractions which constitute different views or aspects of the overall set of appearances" (68).
So perhaps the thesis of determinism and the antithesis of indeterminism resolve in the synthesis of the noumenal world. In fact, Bohm says observables have no fundamental significance and prefers an entity dubbed a "be-able," again showing his "implicate order" has something in common with our "noumenal world." And yet our conceptualization is at root more radical than is his.
One specialist in relativity theory, Kip S. Thorne (69), has expressed a different take. Is it possible that the spacetime continuum, or spacetime block, is multiply connected? After all, if, as relativity holds, a Riemannian geometry is what expresses spacetime, then naive Euclidean space is not operative, except locally, at scales where the curvature is negligible. So in that case, it shouldn't be all that surprising that spacetime might have "holes" connecting one region to another. Where would such wormholes be most plausible? In black holes, Thorne says. By this, the possibility of a "naked singularity" is addressed. The singularity is the point at which Einstein's field equations cease to be operative; the presumed infinitely dense point at the center of mass doesn't exist because the wormhole ensures that the singularity never occurs; it smooths out spacetime (70).
One can see an analog of this by considering a sphere, which is the surface of a ball. A wormhole would be analogous to a straight-line tunnel connecting Berlin and London by bypassing the curvature of the Earth. So on this analogy, one can think of such tunnels connecting different regions of spacetime. The geodesic -- analogous to a great circle on a sphere -- yields the shortest distance between points in Einstein spacetime. But if we posit a manifold, or cosmic framework, of at least five dimensions then one finds shortcuts, topologically, connecting distinct points on the spacetime "surface." Does this accord with physical reality? The answer is not yet in.
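The analogy can be made numerical. Using ordinary reference values for the Earth's radius and the two cities' coordinates, the sketch below compares the great-circle (geodesic) distance along the surface with the straight "tunnel" through the interior; for nearby cities the saving is small, but for antipodal points the tunnel is 2R against a surface distance of pi times R.

```python
# Sphere analogy made numerical: surface geodesic versus straight tunnel.
# City coordinates and the Earth's radius are ordinary reference values.
import math

R = 6371.0                                   # Earth radius, km
berlin = (52.52, 13.405)                     # latitude, longitude in degrees
london = (51.507, -0.128)

def to_xyz(lat, lon):
    lat, lon = math.radians(lat), math.radians(lon)
    return (R * math.cos(lat) * math.cos(lon),
            R * math.cos(lat) * math.sin(lon),
            R * math.sin(lat))

p, q = to_xyz(*berlin), to_xyz(*london)
chord = math.dist(p, q)                      # straight "tunnel" through the Earth
angle = 2 * math.asin(chord / (2 * R))       # central angle between the cities
arc = R * angle                              # great-circle (surface) distance
print(f"surface geodesic ~ {arc:.1f} km, tunnel ~ {chord:.1f} km")
```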
Such wormholes could connect different points in time without connecting different regions of space, thereby setting up a time travel scenario, though Thorne is quoted as arguing that his equations preclude time travel paradoxes.
Thorne's ideas on black holes and wormholes
https://en.wikipedia.org/wiki/Kip_Thorne
The standard many-worlds conjecture is an interpretation of quantum mechanics that asserts that a universal wave function represents objective phenomenal reality. So there is no intrinsically random "collapse of the wave function" when a detection occurs. The idea is to be rid of the Schroedinger cat scenario by requiring that in one world the cat is alive and in another it is dead. The observer's world is determined by whether he detects cat dead or cat alive. These worlds are continually unfolding.
The key point here is the attempt to return to a fully deterministic universe, a modern Laplacian clockwork model. However, as the observer is unable to foretell which world he will end up in, his ignorance (stemming from randomness1 and randomness2) is tantamount to intrinsic quantum randomness (randomness3).
In fact, I wonder how much of a gain there is in saying Schroedinger's cat was alive in one world and dead in another prior to observation as opposed to saying the cat was in two superposed states relative to the observer.
On the other hand it seems likely that Hawking favors the notion of a universal wave function because it implies that information represents hard, "external" reality. But even so, the information exists in superposed states as far as a human observer is concerned.
At present, there is no means of calculating which world the cat's observer will find himself in. He can only apply the usual quantum probability methods.
Time-bending implications
What few have understood about Aspect's verification of quantum results is that time itself is subject to quantum weirdness.
A logical result of the entanglement finding is this scenario:
We have two detectors, A which is two meters from the source and B which is one meter distant. You are positioned at detector A and cannot observe B. Detector A goes off and registers, say, spin "down." You know immediately that Detector B must read spin "up" (assuming no equipment-generated error). That is, from your position, the detector at B went off before your detector at A. If you like, you may in principle greatly increase the scale of the distances to the detectors. It makes no difference. B seems to have received a signal before you even looked at A. It's as if time is going backward with respect to B, as far as you are concerned.
Now it is true that a great many physicists agree with Einstein in disdaining such scenarios, and assume that the picture is incomplete. However, incomplete or not, the fact is that the observer's sense of time is stood on its head. And this logical implication is validated by Aspect's results.
Now, let's extend this experimentally doable scenario with a thought experiment reminiscent of Schroedinger's cat. Suppose you have an assistant stationed at detector B, at X kilometers from the source. You are at X/2 kilometers from the source. Your assistant is to record the detection as soon as it goes off, but wait for your call to report the property. As soon as you look at A, you know his property will be the complement of yours. So was he in a superposed state with respect to you? Obnoxious as many find this, the logical outcome, based on the Aspect experiments and quantum rules, is yes.
True, you cannot in relativity theory receive the information from your assistant faster than C, thus presenting the illusion of time linearity. And yet, I suggest, neither time nor our memories are what we suppose them to be.
The amplituhedron
When big particle accelerators were introduced, it was found that Richard Feynman's diagrams, though conceptually useful, were woefully inadequate for calculating actual particle interactions. As a result, physicists have introduced a remarkable calculational tool called the "amplituhedron." This is a topological object that exists in higher-dimensional space. Particles are assumed to follow the rules in this object, and not the rules of mechanistic or pseudo-mechanistic and continuous Newtonian and Einsteinian spacetime.
Specifically, it was found that the scattering amplitude equals the volume of this object. The details of a particular scattering process dictate the dimensionality and facets of the corresponding amplituhedron.
It has been suggested that the amplituhedron, or a similar geometric object, could help resolve the perplexing lack of commensurability of particle theory and relativity theory by removing two deeply rooted principles of physics: locality and unitarity.
“Both are hard-wired in the usual way we think about things,” according to Nima Arkani-Hamed, a professor of physics at the Institute for Advanced Study in Princeton. “Both are suspect.”
Locality is the notion that particles can interact only from adjoining positions in space and time. And unitarity holds that the probabilities of all possible outcomes of a quantum mechanical interaction must add up to one. The concepts are the central pillars of quantum field theory in its original form, but in certain situations involving gravity, both break down, suggesting neither is a fundamental aspect of nature.
At this point I interject that an axiom of nearly all probability theories is that the probabilities of the outcome set must sum to 1. So if, at a fundamental, noumenal level, this axiom does not hold, what does this bode for the whole concept of probability? At the very least, we sense some sort of nonlinearity here. (At this point we must acknowledge that quantum physicists have for decades used negative probabilities with respect to the situation before the "collapse of the wave function," but normalization to unity is preserved.)
Mark Burgin on negative probabilities
http://arxiv.org/ftp/arxiv/papers/1008/1008.1287.pdf
Wikipedia article on negative probabilities
https://en.wikipedia.org/wiki/Negative_probability
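As a small illustration of the kind of quasi-probability bookkeeping discussed in the links above (the numbers here are invented purely for illustration), individual "probabilities" may dip below zero while the total remains pinned to one.

```python
# Invented quasi-probability assignment: entries may be negative,
# but the normalization to unity is preserved.
quasi_probs = {"outcome_1": 0.6, "outcome_2": 0.7, "outcome_3": -0.3}

total = sum(quasi_probs.values())
assert abs(total - 1.0) < 1e-12              # still sums to one
negatives = {k: v for k, v in quasi_probs.items() if v < 0}
print(f"total = {total}, negative entries = {negatives}")
```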
According to the article linked below, scientists have also found a “master amplituhedron” with an infinite number of facets, analogous to a circle in 2-D, which has an infinite number of sides. This amplituhedron's volume represents, in theory, the total amplitude of all physical processes. Lower-dimensional amplituhedra, which correspond to interactions between finite numbers of particles, are conceived of as existing on the faces of this master structure.
“They are very powerful calculational techniques, but they are also incredibly suggestive,” said one scientist. “They suggest that thinking in terms of space-time was not the right way of going about this.”
“We can’t rely on the usual familiar quantum mechanical spacetime pictures of describing physics,” said Arkani-Hamed. “We have to learn new ways of talking about it. This work is a baby step in that direction.”
So it indeed looks as though time and space are in fact some sort of illusion.
In my estimate, the amplituhedron is a means of detecting the noumenal world that is beyond the world of appearances or phenomena. Quantum weirdness implies that interactions are occurring in a way and place that do not obey our typical perceptual conceits. It's as if, in our usual perceptual state, we are encountering the "shadows" of "projections" from another "manifold."
Simons Foundation article on the amplituhedron
https://www.simonsfoundation.org/quanta/20130917-a-jewel-at-the-heart-of-quantum-physics/
The spacetime block of relativity theory likewise suggests that there is a realm that transcends ordinary energy, time and motion.
Zeno's paradox returns
Motion is in the eye of the beholder.
When an object is lifted to height n, it has a specific potential energy definable in terms of Planck's energy constant. Hence, only the potential energies associated with multiples of Planck's constant are permitted. In that case, only heights associated with those potential energies are permitted. When the object is released and falls, its kinetic energy increases with the acceleration. But the rule that only multiples of Planck's constant are permitted means that there is a finite number of transition heights before the object hits the ground. So what happens between quantum height y and quantum height y - 1?
No doubt Zeno would be delighted with the answer:
The macro-object can't cross these quantum "barriers" via what we think of as motion. The macro-object makes a set of quantum jumps across each "barrier," exactly like electrons in an atom jumping from one orbital probability shell to another.
Here we have a clear-cut instance of the "macro-world" deceiving us, when in fact "motion" must occur in quantum jumps. This is important for not only what it says about motion, but also because it shows that the macro-world is highly -- not minimally -- interactive with the "quantum world." Or that is, that both are highly interactive with some noumenal world that can only be apprehended indirectly.
Even in classical physics, notes Popper in his attack on the Copenhagen interpretation, if acceleration is measured too finely, one finds one gets an indeterminate value, as in a = 0/0 (71).
Even on a cosmic scale, quantum weirdness is logically required.
Cosmic weirdness
Suppose we had a theory of everything (ToE) algorithm. Then at proper time ta we will be able to get a snapshot of the ToE waveform -- obtained from the evolving net ToE vector -- from ta to tb. It is pointless to decompose the waveform below the threshold set by Planck's constant. So the discrete superpositions of the ToE, which might be used to describe the evolution of the cosmos, cannot be reduced to some continuum level. If they could be reduced infinitely, then the cosmic waveform would in effect represent a single history. But, the fact that the waveform is composed of quantum harmonics means that more than one history (and future) is in superposition.
In this respect, we see that quantum theory requires "many universes," though not necessarily in the sense of Hugh Everett or of those who posit "bubble" universes.
Many will object that what we have is simply an interpretation of the meaning of quantum theory. But, I reply that once hidden variables are eliminated, and granted the success of the Aspect experiments, quantum weirdness logically follows from quantum theory.
EPR, action at a distance, special relativity and the fuzziness of motion and time and of the cosmos itself, all suggest that our reality process only reflects but cannot represent noumenal reality. That is, what we visualize and see is not what's actually behind what we visualize and see. Quantum theory gives us some insight into how noumena are mapped into three- and four-dimensional phenomena, but much remains uncharted.
So if phenomenon A correlates with phenomenon B, we may be able to find some algorithm that predicts this and other outcomes with a probability near 1. But if A and B are phenomena with a relation determined in a noumenal "world," then what is to prevent all sorts of oddities that make no sense to phenomenalist physicists? Answer: If so, it might be difficult to plumb such a world, just as a shadow only discloses some information about the object between the projection and the light source.
Physicists are, I would say, somewhat more likely to accept a nonlinearity in causality than are scientists in general. For example, Brian Josephson, a Nobel laureate in physics, favors a radical overhaul of physical knowledge by taking into account such peculiarities as outlined by John A. Wheeler, who proposes a "participatory universe." Josephson believes C.S. Peirce's semiotics combined with a new approach to biology may help resolve the impasses of physics, such as the evident incommensurability of the standard model of particle physics with the general theory of relativity.
Josephson on a 'participatory universe'
http://arxiv.org/pdf/1108.4860v4.pdf
And Max Tegmark argues that the cosmos has virtually zero algorithmic information content, despite the assumption that "an accurate description of the state of the universe appears to require a mind-bogglingly large and perhaps even infinite amount of information, even if we restrict our attention to a small subsystem such as a rabbit."
But, he says that if the Schroedinger equation is universally valid, then "decoherence together with the standard chaotic behavior of certain non-linear systems will make the universe appear extremely complex to any self-aware subsets that happen to inhabit it now, even if it was in a quite simple state shortly after the big bang."
Tegmark's home page
http://space.mit.edu/home/tegmark/home.html
Roger Penrose has long been interested in the "huge gap" in the understanding of physics posed by the Schroedinger's cat scenario. He sees this issue as strongly suggestive of a quantum influence in consciousness -- consciousness being crucial to the collapse of Schroedinger's wave function.
He and Stuart Hameroff, a biologist, propose that microtubules in the brain are where the relevant quantum activities occur.
Even though Penrose is attempting to expunge the problem of superposed realities from physics with his novel proposal, the point to notice here is that he argues that the quantum enigma is indicative of something beyond current physical knowledge that must be taken into account. The conscious mind, he claims, is not at root doing routine calculations. That chore is handled by the unconscious autonomic systems, he says.
In our terms, he is pointing to the existence of a noumenal world that does not operate in the routine "cause-effect" mode of a calculational model.
The 'Orch OR' model for consciousness
http://www.quantumconsciousness.org/penrose-hameroff/orchOR.html
Penrose talk on quantum activity in consciousness
https://www.youtube.com/watch?v=3WXTX0IUaOg
On the other hand, there has always been a strong belief in non-illusional reality among physicists. We have Einstein and Popper as notable examples. Popper was greatly influenced by Alfred Landé, whose strong opposition to the Copenhagen interpretation is spelled out in books published well before Aspect's experiments had confirmed bi-locality to the satisfaction of most physicists (72).
Yet, the approach of many probability theorists has been to ignore these sorts of implications. Carnap's attitude is typical. In his Logical Foundations of Probability (73), Carnap mentions a discussion by James Jeans of the probability waves of quantum mechanics, which Jeans characterizes as "waves of knowledge," implying "a pronounced step in the direction of mentalism" (74).
But Carnap breezes right past Jeans's point, an omission that I hazard to guess calls into question the logical foundation of Carnap's whole system -- though I note that I have not attempted to plow through the dense forest of mathematical logic symbols in Carnap's book.
I have tried to address some of the issues in need of exploration in my paper Toward, which discusses the reality construction process and its implications. Many worlds of probability is intended as a companion to that paper. Please see
Appendix D. Toward a signal model of perception
https://manyworlds784.blogspot.com/p/contents-overview-philosophy-time-and.html
We have two opposing tendencies: On the one hand, our experiments require that detections occur according to a frequency-style "probability wave," in which the probability of a detection is constrained by the square of the wave amplitude. If numerous trials are done, the law of large numbers will come into effect in, say, the correlation found in an Aspect experiment. So our sense is that quantum probabilities are intrinsic, and that quantum randomness is fundamental. That is, quantum propensities verify an "objective external reality."
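A minimal sketch of that frequency-style behavior, using an arbitrary illustrative amplitude: detections generated with Born-rule probability |amplitude|^2 show relative frequencies settling toward that value as trials accumulate, which is the sense in which the propensity seems to point to an "objective external reality."

```python
# Born-rule probability plus the law of large numbers, with an illustrative amplitude.
import random

amplitude = complex(0.6, 0.3)                # illustrative probability amplitude
p = abs(amplitude) ** 2                      # Born rule: P = |amplitude|^2 = 0.45

random.seed(1)
for n in (100, 10_000, 1_000_000):
    hits = sum(random.random() < p for _ in range(n))
    print(f"n = {n:>9}: relative frequency = {hits / n:.4f} (P = {p})")
```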
On the other hand, the logical implication of such randomness -- as demonstrated in the double-slit and Aspect experiments -- is that what we call reality must be more subjective than usually supposed, that various histories (and hence futures) are superpositions of potential outcomes and do not actualize until observation in the form of cognitive focus (which may be, for all we know, partly unconscious). So one's mental state and train of thought must influence -- after the manner of the science fiction film The Matrix -- one's perceived world. This is the asymmetric three-sink vector field problem. Where will the fixed point (as in center of mass or gravity) be found?
So then assignment of probabilities may seem to make sense, but only if one neglects the influence of one's mind on the perceived outcomes. As long as you stay in your assumed "world," probability calculations may work well enough -- as you inhabit a world that "goes in that direction" (where many people micromanage "the future" in terms of probabilities).
This logical outcome of course is what Popper and many others have objected to. But, despite a great many attempts to counter this line of thought, none have succeeded (as I argue in Toward). At the end of his career, Popper, faced with the Aspect results, was reduced to a statement of faith: even if bi-locality holds, his belief in classical realism would not be shaken.
He points to the horror of Hiroshima and Nagasaki and the "real suffering" of the victims as a moral reason to uphold his objectivism. That is, he was arguing that we must not trivialize their pain by saying our perception of what happened is a consequence of some sort of illusion (75).
Einstein, it seems clear, implicitly believed in a phenomenal world only, though his own progress with relativity required ditching such seemingly necessary phenomena as the ether, which had been held to mediate not only light waves but gravitational waves. In his more mature period, he conceded that the spacetime continuum devised by himself and Minkowski was in effect an ether. My estimate is that Einstein mildly conceded a noumenal world, but resisted a strong dependence on such a concept. Bohm, who favored a form of "realism," settled on a noumenal world with the analogies of a holographic universe and of the "implicate" order shown by an ink blob that unravels when spun in a viscous fluid and is pretty much exactly restored to its original state when its spin is reversed. Phenomena are observed because of some noumenal relation.
So we say there is some physical process going on which we can only capture in part. By probability wave, we mean that we may use a wave model to represent what we can know about the unfolding of the process. The probability wave on the one hand implies an objective reality but on the other a reality unfolding in a feedback loop within one's brain-mind.
Waves imply change. But whatever is going on in some noumenal realm is partly predictable in terms of probability of observed properties. That is, the probability wave function is a means of partly predicting event observations but we cannot say it models the noumenal process precisely or at all.
As Jeans put it:
"Heisenberg attacked the enigma of the physical universe by giving up the main enigma -- the nature of the physical universe -- as insoluble, and concentrating on the minor puzzle of co-ordinating our observations of the universe. Thus it is not surprising that the wave picture which finally emerged should prove to be concerned solely with our knowledge of the universe as obtained from our observations (76)."
This pragmatic idea of ignoring the noumenal, or as some might prefer, sub-phenomenal, world has been largely adopted by practicing scientists, who also adopt a background assumption that discussion of interpretation is philosophy and hence outside science. They accept Popper's view that there exists a line dividing science from meta-science and similarly his view that interpretations are not falsifiable. And yet, a counterexample to that belief is the fact that Einstein's interpretation barring bi-locality was falsified, in the minds of most physicists, by the experiments of Aspect.
Go to Chapter 10 HERE.
Newton with his belief in absolute space and time considers motion a proof of the creation of the world out of God's arbitrary will, for otherwise it would be inexplicable why matter moves in this [relative to a fixed background frame of reference] rather than any other direction. -- Hermann Weyl (60).
Weyl, a mathematician with a strong comprehension of physics, had quite a lot to say about spacetime. For example, he argued that Mach's principle, as adopted by Einstein, was inconsistent with general relativity.
Background on Weyl
http://plato.stanford.edu/entries/weyl/#LawMotMacPriWeyCosPos
Weyl's book 'Symmetry' online
https://archive.org/details/Symmetry_482
See also my paper,
Einstein, Sommerfeld and the Twin Paradox
http://paulpages.blogspot.com/2013/10/einstein-sommerfeld-and-twin-paradox.html
Einstein had hoped to deal only with "observable facts," in accord with Mach's empiricist (and logical positivist) program, and hence to reduce spacetime motions to relative actions among bodies, but Weyl found that such a level of reduction left logical holes in general relativity. One cannot, I suggest, escape the background frame, even if it is not a strictly Newtonian background frame. Sometimes this frame is construed to be a four-dimensional spacetime block.
So how would one describe the "activity" of a four-dimensional spacetime block? Something must be going on, we think, yet, from our perspective looking "out," that something "transcends" space and time.
Popper, in his defense of phenomenal realism, objected that the spacetime block interpretation of relativity theory implies that time and motion are somehow frozen, or not quite real. While not directly challenging relativity theory, he objected to such a manifold and ruled it out as not in accord with reality as he thought reality ought to be. But, we hasten to make note of Popper's trenchant criticism of the logical positivism of most scientists.
My thought: "Laws" of nature, such as Einstein's law of universal gravitation, are often thought of in a causative sense, as in "the apple drops at 9.81 meters per second squared by cause of the law of gravity."
Actually, the law describes a range of phenomena which are found to be predictable via mathematical formulas. We have a set of observable relations "governed" by the equations. If something has mass or momentum, we predict that it will follow a calculated trajectory. But, as Newton knew, he had an algorithm for representing actions in nature, but he had not got to the world beneath superficial appearances. How does gravity exercise action at a distance? If you say, via curved spacetime fields, one may ask, how does spacetime "know" to curve?
We may say that gravity is a principle cause of the effect of a rock falling. But, in truth, no one knows what gravity is. "Gravity" is a word used to represent behavior of certain phenomena, and that behavior is predictable and calculable, though such predictability remains open to Hume's criticism.
On this line, it should be noted that Einstein at first resisted basing what became his General Theory of Relativity on geometrical (or, topological) representations of space and time. He thought that physical insight should accompany his field equations, but eventually he settled on spacetime curvature as insight enough. His competition with David Hilbert may well have spurred him to drop that proviso. Of course, we all know of his inability to accept the lack of "realism" implied by quantum mechanics, which got the mathematics right but dispensed with certain givens of phenomenal realism. To this end, we note that he once said that he favored the idea that general relativity's mathematics gave correct answers without accepting the notion that space and time were "really" curved.
Newton had the same problem: There was, to him, an unsatisfactory physical intuition for action at a distance. Some argue that this difficulty has been resolved through the use of "fields," which act as media for wave motion. The electromagnetic field is invoked as a replacement for the ether that Einstein ejected from physics as a useless concept. Still, Einstein saw that the field model was intuitively unsatisfactory.
As demonstrated by the "philosophical" issues raised by quantum theory, the problem is the quantization of energies needed to account for chains of causation. When the energy reaches the quantum level, there are "gaps" in the chain. Hence the issue of causation can't easily be dismissed as a problem of "metaphysics" but is in truth a very important area of discussion on what constitutes "good physics."
One can easily visualize pushing an object, but it is impossible to visualize pulling an object. In everyday experience, when one "pulls," one is in fact pushing. Yet, at the particle level, push and pull are complementary properties associated with charge sign. This fact is now sufficiently familiar as not to seem occult or noumenal. Action at a distance doesn't seem so mysterious, especially if we invoke fields, which are easy enough to describe mathematically, but does anyone really know what a field is? The idea that gravitation is a macro-effect from the actions of gravitons may one day enhance our understanding of nature. But that doesn't mean we really know what's going on at the graviton level.
Gott (61), for example, is representative of numerous physicists who see time as implying many strange possibilities. And Goedel had already argued in the 1940s that time must not exist at all, implying it is some sort of illusion. Goedel had found a solution to Einstein's field equations of general relativity for a rotating universe in which closed time loops exist, meaning a rocket might travel far enough to find itself in its past. Einstein shrugged off this finding of his good friend, arguing that it does not represent physical reality. But Goedel countered that if such a solution exists at all, then time cannot be what we take it to be and doesn't actually exist (62).
These days, physicists are quite willing to entertain multiple dimension theories of cosmology, as in the many dimensions of string theory and M theory.
We have Penrose's cyclic theory of the cosmos (63), which differs from previous Big Bang-Big Crunch cyclic models. Another idea comes from Paul J. Steinhardt, who proposes an "ekpyrotic universe" model. He writes that his model is based on the idea that our hot big bang universe was formed from the collision of two three-dimensional worlds moving along a hidden, extra dimension. "The two three-dimensional worlds collide and 'stick,' the kinetic energy in the collision is converted to quarks, electrons, photons, etc., that are confined to move along three dimensions. The resulting temperature is finite, so the hot big bang phase begins without a singularity."
Steinhardt on the ekpyrotic universe
http://wwwphy.princeton.edu/~steinh/npr/
The real point here is that spacetime, whatever it is, is rather strange stuff. If space and time "in the extremes" hold strange properties, should we not be cautious about assigning probabilities based on absolute Newtonian space and equably flowing time? It is not necessarily a safe assumption that what is important "in the extremes" has no relevance locally.
And yet, here we are, experiencing "time," or something. The difficulty of coming to grips with the meaning of time suggests that beyond the phenomenal world of appearances is a noumenal world that operates along the lines of Bohm's implicate order, or -- in his metaphor -- of a "holographic universe."
But time is even more mind-twisting in the arena of quantum phenomena (as discussed in Noumena II, below).
The "anthropic cosmological principle" has been a continuing vexation for cosmologists (64). Why is it that the universe seems to be so acutely fine-tuned to permit and encourage human life? One answer is that perhaps we are in a multiverse, or collection of noninteracting or weakly interacting cosmoses. The apparent miniscule probability that the laws and constants are so well suited for the appearance of humans might be answered by increasing the number and variety of cosmoses and hence increasing the distribution of probabilities for cosmic constants.
The apparent improbability of life is not the only reason physicists have for multiverse conjectures. But our concern here is that physicists have used probabilistic reasoning on a question of the existence of life. This sort of reasoning is strongly reminiscent of Pascal's wager and I would argue that the question is too great for the method of probability analysis. The propensity information is far too iffy, if not altogether zero. Yet, that doesn't mean the problem is without merit. To me, it shows that probability logic cannot be applied universally and that it is perforce incomplete. It is not only technically incomplete in Goedel's sense, it is incomplete because it fundamentally rests on the unknowable.
Paul Davies, in the Guardian, wrote: "The multiverse comes with a lot of baggage, such as an overarching space and time to host all those bangs, a universe-generating mechanism to trigger them, physical fields to populate the universes with material stuff, and a selection of forces to make things happen. Cosmologists embrace these features by envisaging sweeping 'meta-laws' that pervade the multiverse and spawn specific bylaws on a universe-by-universe basis. The meta-laws themselves remain unexplained -- eternal, immutable transcendent entities that just happen to exist and must simply be accepted as given. In that respect the meta-laws have a similar status to an unexplained transcendent god." Davies concludes, "Although cosmology has advanced enormously since the time of Laplace, the situation remains the same: there is no compelling need for a supernatural being or prime mover to start the universe off. But when it comes to the laws that explain the big bang, we are in murkier waters."
Davies on the multiverse
http://www.theguardian.com/commentisfree/belief/2010/sep/04/stephen-hawking-big-bang-gap
Noumena II: Quantum weirdness
The double-slit experiment
The weird results of quantum experiments have been known since the 1920s and are what led Werner Heisenberg to his breakthrough mathematical systemization of quantum mechanics.
An example of quantum weirdness is the double-slit experiment, which can be performed with various elementary particles. Consider the case of photons, in which the intensity of the beam is reduced to the point that only one photon at a time is fired at the screen with the slits. In the case where only one slit is open, the photo-plate detector on the other side of the screen will record basically one spot where the photons that make it through the slit arrive in what one takes to be a straight line from source to detector.
However, when two slits are open the photons are detected at different places on the plate. The positions are not fully predictable, and so are random within constraints. After a sufficient number of detections, the trained observer notices a pattern. The spots are forming a diffraction pattern associated with how one would expect a wave going through the two slits to separate into two subwaves, and simultaneously interact, showing regions of constructive and destructive wave interference. However, the components of the "waves," are the isolated detection events. From this effect, Max Born described the action of the particles in terms of probability amplitudes, or that is, waves of probability.
This is weird because there seems to be no way, in terms of classical causality, for the individual detection events to signal to the incoming photons where they ought to land. It also hints that the concept of time isn't what we typically take it to be. That is, one might interpret this result to mean that once the pattern is noticed, one cannot ascribe separate time units to each photon (which seems to be what Popper, influenced by Landes, was advocating). Rather, it might be argued that after the fact the experiment must be construed as an irreducible whole. This bizarre result occurs whether the particles are fired off at the velocity of light or well below it.
Schroedinger's cat
In 1935, Erwin Schroedinger, proposed what has come to be known as the Schroedinger's cat thought experiment in an attempt to refute the idea that a quantum property in many experimental cases cannot be predicted precisely, but can only be known probabilistically prior to measurement. Exactly what does one mean by measurement? In the last analysis, isn't a measurement an activity of the observer's brain?
To underscore how ludicrous he thought the probability amplitude idea is, Schroedinger gave this scenario: Suppose we place a cat in a box which contains a poison gas pellet rigged to a Geiger counter that measures radioactive decay. The radioactive substance has some suitable half-life, meaning there is a probability that it is detected or not.
Now in the standard view of quantum theory, there is no routine causation that can be accessed that gives an exact time the detection will occur. So then, the property (in this case, the time of the measurement) does not exist prior to detection but exists in some sort of limbo in which the quantum possibilities -- detection at time interval x and non-detection at time interval x -- are conceived of as a wave of probabilities, with the potential outcomes superposed on each other.
So then, demanded Schroedinger, does not this logically require that the cat is neither alive nor dead prior to the elapse of the specified time interval?! Of course, once we open the box, the "wave function collapses" and the cat's condition -- dead or alive -- tells us whether the quantum event has been detected. The cat's condition is just as much of a detection event as a photo-plate showing a bright spot.
Does this not then mean that history must be observer-centric? However absurd that may seem, no one has found a way out of this dilemma, despite many attempts (see Toward). Einstein conceded that such a model was consistent, but rejected it on philosophical grounds. You don't really suppose the moon is not there when you aren't looking, he said.
The EPR scenario
In fact, also in 1935, Einstein and two coauthors unveiled another attack on quantum weirdness, known as the Einstein-Podolsky-Rosen (EPR) thought experiment, in which the authors pointed out that quantum theory implies what Einstein called "spooky action at a distance," requiring influences that outpace C, the velocity of light in a vacuum, which is an anchor of his theory of relativity. Later, John Bell found a way to apply a test of whether statistical correlations would uphold the spooky quantum predictions. Experiments by Alain Aspect in the 1980s, and by others since, have confirmed, to the satisfaction of most experts, that quantum "teleportation" occurs.
So we may regard a particle as carrying a potential for some property or state that is only revealed upon detection. That is, the experiment "collapses the wave function" with respect to the property of interest. Curiously, it is possible to "entangle" two particles of the same type at some source. The quantum equations require that each particle carry the complement of the other particle's property -- even though one cannot, in a proper experiment, predict which property will be detected first.
Bohm's version of EPR is easy to follow: An electron has a property called "spin." Just as a screw may turn left or right, an electron's spin is given as "up" or "down," according to which way it registers in a Stern-Gerlach device. There are only two possibilities, because the electron's rotational motion is quantized into halves -- as if the rotation jumps immediately to its mirror position without any transition, just as the Bohr electron occupies specific, discontinuous "shells" around a nucleus.
Concerning electron spin
http://hyperphysics.phy-astr.gsu.edu/hbase/spin.html
So if we entangle electrons at a source and send them off in different directions, quantum theory declares that if we detect spin "up" at detector A, then detector B must read spin "down."
In that case, as Einstein and his coauthors pointed out, doesn't that mean that the detection at A required a signal to reach B faster than the velocity of light?
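What the quantum rules assert can be put in a few lines of bookkeeping. The sketch below is my illustration, not a mechanism: it simply encodes the perfect anti-correlation for same-axis measurements, and the puzzle is precisely that no local machinery is supplied to produce it.

import random

# Minimal sketch: same-axis measurements on an entangled pair are perfectly
# anti-correlated, yet which outcome occurs at A is random trial by trial.
def measure_entangled_pair():
    a = random.choice(["up", "down"])     # unpredictable at detector A
    b = "down" if a == "up" else "up"     # B is forced to the complement
    return a, b

for a, b in (measure_entangled_pair() for _ in range(10)):
    print(a, b)   # A varies randomly; B is always the opposite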
For decades, EPR remained a thought experiment only. A difficulty was that detectors and their related measuring equipment tend to be slightly cranky, giving false positives and false negatives. Error-correction methods might have reduced the problem, but it wasn't until Bell introduced his statistical inequalities that the possibility arose of conducting actual tests of correlation.
In the early 1980s Aspect arranged photon experiments that tested Bell's inequalities and made the sensational discovery that the correlations showed Einstein to be wrong: detection of one property strongly implied that the "co-particle" would be detected with the complement property. (We should tip our hats to both Schroedinger and Einstein for the acuity of their thought experiments.) Further, Aspect did experiments in which the monitors were arranged so that any signal from one particle to the other would necessarily have to exceed the velocity of light. Even so, the results held.
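The flavor of Bell's test can be conveyed with the standard textbook CHSH arithmetic; this is the usual schematic form, not Aspect's raw data. Quantum theory predicts a correlation of -cos(a - b) between detector settings a and b, and for suitably chosen angles the combined quantity S exceeds the bound of 2 that any local hidden-variable account must respect.

import math

# Sketch of the standard CHSH bookkeeping (textbook values, not experimental data).
def E(a, b):
    return -math.cos(a - b)   # quantum prediction for a spin-singlet correlation

# The usual angle choices (radians) that maximize the violation:
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))
print(S)   # about 2.828, i.e. 2*sqrt(2); any local hidden-variable model obeys S <= 2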
This property of entanglement is being introduced into computer security schemes because if, say, the NSA or some other party is looking at the data stream, the use of entangled particles can tip off the sender that the stream is being observed.
Hidden variables
John Von Neumann, contradicting Einstein, published a proof that quantum theory was complete in Heisenberg's sense and that "hidden variables" could not be used to devise causal machinery to explain quantum weirdness. Intuitively, one can apprehend this by noting that if one thinks of causes as undetected force vectors, then Planck's constant sets a minimum on the amount of force (defined in terms of energy) that can figure in any detection or observation. If we think of causes in terms of rows of dominos fanning out and at points interacting, we see there is nothing smaller than the "Planck domino." So there are bound to be gaps in what we think of as "real world causation."
Popper objected to Von Neumann's claim on grounds that after it was made, discoveries occurred in the physics of the nucleus that required "new" variables. Yet if hidden variables are taken to mean the forces of quantum chromodynamics and the other field theories, these have no direct influence on the behaviors of quantum mechanics (now known as quantum field theory). Also, these other theories are likewise subject to quantum weirdness, so if we play this game, we end up with a level where the "variables" run out.
We should note that by "hidden variable," Von Neumann evidently had in mind the materialist viewpoint of scientists like Bohm, whose materialism led him to reject the minimalist ideas of the "Copenhagen interpretation," whereby what one cannot in principle observe simply doesn't count. Instead, Bohm sought what might be called a pseudo-materialist reality in which hidden variables are operative provided one concedes the bi-locality inherent in entanglement. In fact, I tend to agree with Bohm's view of some hidden order, as summarized by his "holographic universe" metaphor. On the other hand, I do not agree that he succeeded in his ambition to draw a sharp boundary between the "real external world" and subjective perception.
Bohm quotes John Archibald Wheeler:
"No phenomenon is a phenomenon until it is an observed phenomenon" so that "the universe does not exist 'out there' independently of all acts of observation. It is in some strange sense a participatory universe. The present choice of mode of observation... should influence what we see about the past... the past is undefined and undefinable without the observation" (65).
"We can agree with Wheeler that no phenomenon is a phenomenon until it is observed, as by definition, a phenomenon is what appears. Therefore it evidently cannot be a phenomenon unless it is the content of an observation," Bohm says, adding, "The key point in an ontological interpretation such as ours is to ask the question as to whether there is an underlying reality that exists independently of observation" (66).
Bohm argues that a "many minds" interpretation of quantum effects removes "many of the difficulties with the interpretation of [Hugh] Everett and [Bryce] DeWitt (67), but requires making a theory of mind basically to account for the phenomena of physics. At present we have no foundations for such a theory..." He goes on to find fault with this idea.
And yet, Bohm sees that "ultimately our overall world view is neither absolutely deterministic nor absolutely indeterministic," adding: "Rather it implies that these two extremes are abstractions which constitute different views or aspects of the overall set of appearances" (68).
So perhaps the thesis of determinism and the antithesis of indeterminism resolve in the synthesis of the noumenal world. In fact, Bohm says observables have no fundamental significance and prefers an entity dubbed a "be-able," again showing his "implicate order" has something in common with our "noumenal world." And yet our conceptualization is at root more radical than is his.
One specialist in relativity theory, Kip S. Thorne (69), has expressed a different take. Is it possible that the spacetime continuum, or spacetime block, is multiply connected? After all, if, as relativity holds, a Riemannian geometry is the appropriate way to express spacetime, then naive Euclidean space is not operative, except locally, where curvature becomes negligible. In that case, it shouldn't be all that surprising that spacetime might have "holes" connecting one region to another. Where would such wormholes be most plausible? In black holes, Thorne says. By this, the possibility of a "naked singularity" is addressed. The singularity is the point at which Einstein's field equations cease to be operative; the presumed infinitely dense point at the center of mass doesn't exist because the wormhole ensures that the singularity never forms; it smooths out spacetime (70).
One can see an analog of this by considering a sphere, which is the surface of a ball. A wormhole would be analogous to a straight-line tunnel connecting Berlin and London by bypassing the curvature of the Earth. So on this analogy, one can think of such tunnels connecting different regions of spacetime. The geodesic -- analogous to a great circle on a sphere -- yields the shortest distance between points in Einstein spacetime. But if we posit a manifold, or cosmic framework, of at least five dimensions then one finds shortcuts, topologically, connecting distinct points on the spacetime "surface." Does this accord with physical reality? The answer is not yet in.
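The tunnel analogy can be made quantitative in a toy way. The sketch below is my illustration, with approximate coordinates: it compares the surface path between Berlin and London, the only route available to a strict surface dweller, with the straight chord through the interior that becomes available once one is allowed to leave the two-dimensional surface.

import math

# Toy comparison: geodesic along the Earth's surface vs. straight chord through it.
# Coordinates are approximate; only the comparison matters.
R = 6371.0  # Earth radius in km
berlin = (math.radians(52.52), math.radians(13.405))
london = (math.radians(51.507), math.radians(-0.128))

def central_angle(p, q):
    (lat1, lon1), (lat2, lon2) = p, q
    return math.acos(math.sin(lat1) * math.sin(lat2) +
                     math.cos(lat1) * math.cos(lat2) * math.cos(lon1 - lon2))

theta = central_angle(berlin, london)
surface_path = R * theta                    # along the sphere's surface
tunnel_path = 2 * R * math.sin(theta / 2)   # straight chord through the interior

print(round(surface_path), round(tunnel_path))   # the chord is slightly shorter

For nearby cities the saving is tiny; the point is only that a higher-dimensional embedding opens routes the surface geometry alone does not contain.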
Such wormholes could connect different points in time without connecting different regions of space, thereby setting up a time travel scenario, though Thorne is quoted as arguing that his equations preclude time travel paradoxes.
Thorne's ideas on black holes and wormholes
https://en.wikipedia.org/wiki/Kip_Thorne
The standard many-worlds conjecture is an interpretation of quantum mechanics that asserts that a universal wave function represents objective phenomenal reality. So there is no intrinsically random "collapse of the wave function" when a detection occurs. The idea is to be rid of the Schroedinger cat scenario by holding that in one world the cat is alive and in another it is dead. The observer's world is determined by whether he detects the cat dead or alive. These worlds are continually unfolding.
The key point here is the attempt to return to a fully deterministic universe, a modern Laplacian clockwork model. However, as the observer is unable to foretell which world he will end up in, his ignorance (stemming from randomness1 and randomness2) is tantamount to intrinsic quantum randomness (randomness3).
In fact, I wonder how much of a gain there is in saying Schroedinger's cat was alive in one world and dead in another prior to observation as opposed to saying the cat was in two superposed states relative to the observer.
On the other hand it seems likely that Hawking favors the notion of a universal wave function because it implies that information represents hard, "external" reality. But even so, the information exists in superposed states as far as a human observer is concerned.
At present, there is no means of calculating which world the cat's observer will find himself in. He can only apply the usual quantum probability methods.
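In practice the observer's forward-looking arithmetic is the same whether one speaks of collapse or of branching. A minimal sketch, with an assumed Born-rule weight of my own choosing:

import random

# Sketch: sample each binary quantum event with its Born-rule weight; whether one
# calls the outcome a "collapse" or "the branch I find myself in," the arithmetic
# available to the observer is identical. The weight is illustrative.
p_alive = 0.5   # assumed Born-rule weight for the "cat alive" outcome

def observed_branch():
    return "alive" if random.random() < p_alive else "dead"

outcomes = [observed_branch() for _ in range(10000)]
print(outcomes.count("alive") / len(outcomes))   # hovers near p_alive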
Time-bending implications
What few have understood about Aspect's verification of quantum results is that time itself is subject to quantum weirdness.
A logical result of the entanglement finding is this scenario:
We have two detectors: A, which is two meters from the source, and B, which is one meter distant. You are positioned at detector A and cannot observe B. Detector A goes off and registers, say, spin "down." You know immediately that detector B must read spin "up" (assuming no equipment-generated error). That is, from your position, the detector at B went off before your detector at A. If you like, you may in principle greatly increase the scale of the distances to the detectors. It makes no difference. B seems to have received a signal before you even looked at A. It's as if time is going backward with respect to B, as far as you are concerned.
Now it is true that a great many physicists agree with Einstein in disdaining such scenarios, and assume that the picture is incomplete. However, incomplete or not, the fact is that the observer's sense of time is stood on its head. And this logical implication is validated by Aspect's results.
Now, let's extend this experimentally doable scenario with a thought experiment reminiscent of Schroedinger's cat. Suppose you have an assistant stationed at detector B, at X kilometers from the source. You are at X/2 kilometers from the source. Your assistant is to record the detection as soon as it goes off, but wait for your call to report the property. As soon as you look at A, you know his property will be the complement of yours. So was he in a superposed state with respect to you? Obnoxious as many find this, the logical outcome, based on the Aspect experiments and quantum rules, is yes.
True, you cannot in relativity theory receive the information from your assistant faster than C, thus presenting the illusion of time linearity. And yet, I suggest, neither time nor our memories are what we suppose them to be.
The amplituhedron
When big particle accelerators were introduced, it was found that Richard Feynman's diagrams, though conceptually useful, were woefully inadequate for calculating actual particle interactions. As a result, physicists have introduced a remarkable calculational tool called the "amplituhedron." This is a geometric object that exists in a higher-dimensional space. Particles are assumed to follow the rules encoded in this object, and not the rules of mechanistic or pseudo-mechanistic and continuous Newtonian and Einsteinian spacetime.
Specifically, it was found that the scattering amplitude equals the volume of this object. The details of a particular scattering process dictate the dimensionality and facets of the corresponding amplituhedron.
It has been suggested that the amplituhedron, or a similar geometric object, could help resolve the perplexing lack of commensurability of particle theory and relativity theory by removing two deeply rooted principles of physics: locality and unitarity.
“Both are hard-wired in the usual way we think about things,” according to Nima Arkani-Hamed, a professor of physics at the Institute for Advanced Study in Princeton. “Both are suspect.”
Locality is the notion that particles can interact only from adjoining positions in space and time. And unitarity holds that the probabilities of all possible outcomes of a quantum mechanical interaction must add up to one. The concepts are the central pillars of quantum field theory in its original form, but in certain situations involving gravity, both break down, suggesting neither is a fundamental aspect of nature.
At this point I interject that an axiom of nearly all probability theories is that the probabilities of the outcome set must sum to 1. So if, at a fundamental, noumenal level, this axiom does not hold, what does this bode for the whole concept of probability? At the very least, we sense some sort of nonlinearity here. (We must acknowledge that quantum physicists have for decades used negative probabilities with respect to the situation before the "collapse of the wave function," but the total probability of unity is preserved.)
Mark Burgin on negative probabilities
http://arxiv.org/ftp/arxiv/papers/1008/1008.1287.pdf
Wikipedia article on negative probabilities
https://en.wikipedia.org/wiki/Negative_probability
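A toy illustration of the point, with values of my own choosing made in the spirit of the discussions linked above rather than taken from them: quasi-probability assignments may dip below zero for unobservable intermediate configurations while the total over all configurations still comes to unity.

# Sketch of the bookkeeping behind "negative probabilities" (illustrative values).
quasi = {"config A": 0.6, "config B": 0.7, "config C": -0.3}

print(sum(quasi.values()))                   # 1.0 -- unity is preserved
print(all(p >= 0 for p in quasi.values()))   # False -- not an ordinary distribution
# Any directly observable outcome must still receive an ordinary, non-negative
# probability; the negative entries appear only in intermediate bookkeeping.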
According to the article linked below, scientists have also found a “master amplituhedron” with an infinite number of facets, analogous to a circle in 2-D, which has an infinite number of sides. This amplituhedron's volume represents, in theory, the total amplitude of all physical processes. Lower-dimensional amplituhedra, which correspond to interactions between finite numbers of particles, are conceived of as existing on the faces of this master structure.
“They are very powerful calculational techniques, but they are also incredibly suggestive,” said one scientist. “They suggest that thinking in terms of space-time was not the right way of going about this.”
“We can’t rely on the usual familiar quantum mechanical spacetime pictures of describing physics,” said Arkani-Hamed. “We have to learn new ways of talking about it. This work is a baby step in that direction.”
So it indeed looks as though time and space are in fact some sort of illusion.
In my estimate, the amplituhedron is a means of detecting the noumenal world that is beyond the world of appearances or phenomena. Quantum weirdness implies that interactions are occurring in a way and place that do not obey our typical perceptual conceits. It's as if, in our usual perceptual state, we are encountering the "shadows" of "projections" from another "manifold."
Simons Foundation article on the amplituhedron
https://www.simonsfoundation.org/quanta/20130917-a-jewel-at-the-heart-of-quantum-physics/
The spacetime block of relativity theory likewise suggests that there is a realm that transcends ordinary energy, time and motion.
Zeno's paradox returns
Motion is in the eye of the beholder.
When an object is lifted to height n, it has a specific potential energy definable in terms of Planck's energy constant. Hence, only the potential energies associated with multiples of Planck's constant are permitted. In that case, only heights associated with those potential energies are permitted. When the object is released and falls, its kinetic energy increases with the acceleration. But the rule that only multiples of Planck's constant are permitted means that there is a finite number of transition heights before the object hits the ground. So what happens between quantum height y and quantum height y - 1?
No doubt Zeno would be delighted with the answer:
The macro-object can't cross these quantum "barriers" via what we think of as motion. The macro-object makes a set of quantum jumps across each "barrier," exactly like electrons in an atom jumping from one orbital probability shell to another.
Here we have a clear-cut instance of the "macro-world" deceiving us, when in fact "motion" must occur in quantum jumps. This is important for not only what it says about motion, but also because it shows that the macro-world is highly -- not minimally -- interactive with the "quantum world." Or that is, that both are highly interactive with some noumenal world that can only be apprehended indirectly.
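A back-of-envelope sketch of the counting argument, taking the text's premise at face value: since Planck's constant has units of energy multiplied by time, turning it into an energy step requires some reference frequency, and the frequency, mass and height below are arbitrary assumptions made only to show that the count of permitted "heights" comes out finite.

# Counting sketch under the text's premise (all values are illustrative assumptions).
h = 6.626e-34            # Planck's constant, J*s
nu = 1.0e15              # assumed reference frequency, Hz
m, g, height = 1.0, 9.81, 2.0   # kg, m/s^2, m

energy_step = h * nu            # smallest permitted change in energy, on this premise
total_drop = m * g * height     # potential energy released in the fall
n_steps = total_drop // energy_step   # finite count of permitted "heights"

print(energy_step, total_drop, int(n_steps))
# The count is astronomically large (about 3e19 here) but finite, which is the text's
# point: between adjacent permitted heights there is no continuous "motion," only a jump.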
Even in classical physics, notes Popper in his attack on the Copenhagen interpretation, if acceleration is measured too finely, one gets an indeterminate value, as in a = 0/0 (71).
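One way to read Popper's point: estimate acceleration by finite differences from positions that can only be recorded to some finite resolution. The sketch below is my construction, with an assumed resolution and time steps; it shows the difference quotient degenerating once the time step is too fine.

# Finite-difference acceleration from positions recorded at finite resolution.
resolution = 1e-6   # meters: smallest distinguishable change in position, assumed
g = 9.81

def recorded_position(t):
    exact = 0.5 * g * t * t
    return round(exact / resolution) * resolution   # what the instrument reports

def estimated_acceleration(t, dt):
    x0, x1, x2 = (recorded_position(t + k * dt) for k in range(3))
    return (x2 - 2 * x1 + x0), dt * dt   # numerator and denominator of the estimate

print(estimated_acceleration(1.0, 1e-2))   # ratio of the pair is close to g
print(estimated_acceleration(1.0, 1e-6))   # numerator collapses toward 0: effectively 0/0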
Even on a cosmic scale, quantum weirdness is logically required.
Cosmic weirdness
Suppose we had a theory of everything (ToE) algorithm. Then at proper time ta we would be able to get a snapshot of the ToE waveform -- obtained from the evolving net ToE vector -- covering ta to tb. It is pointless to decompose the waveform below the threshold set by Planck's constant. So the discrete superpositions of the ToE, which might be used to describe the evolution of the cosmos, cannot be reduced to some continuum level. If they could be reduced infinitely, then the cosmic waveform would in effect represent a single history. But the fact that the waveform is composed of quantum harmonics means that more than one history (and future) is in superposition.
In this respect, we see that quantum theory requires "many universes," though not necessarily in the sense of Hugh Everett or of those who posit "bubble" universes.
Many will object that what we have is simply an interpretation of the meaning of quantum theory. But, I reply that once hidden variables are eliminated, and granted the success of the Aspect experiments, quantum weirdness logically follows from quantum theory.
EPR, action at a distance, special relativity and the fuzziness of motion and time and of the cosmos itself, all suggest that our reality process only reflects but cannot represent noumenal reality. That is, what we visualize and see is not what's actually behind what we visualize and see. Quantum theory gives us some insight into how noumena are mapped into three- and four-dimensional phenomena, but much remains uncharted.
So if phenomenon A correlates with phenomenon B, we may be able to find some algorithm that predicts this and other outcomes with a probability near 1. But if A and B are phenomena with a relation determined in a noumenal "world," then what is to prevent all sorts of oddities that make no sense to phenomenalist physicists? Answer: If so, it might be difficult to plumb such a world, just as a shadow only discloses some information about the object between the projection and the light source.
Physicists are, I would say, somewhat more likely to accept a nonlinearity in causality than are scientists in general. For example, Brian Josephson, a Nobel laureate in physics, favors a radical overhaul of physical knowledge by taking into account such peculiarities as outlined by John A. Wheeler, who proposes a "participatory universe." Josephson believes C.S. Peirce's semiotics combined with a new approach to biology may help resolve the impasses of physics, such as the evident incommensurability of the standard model of particle physics with the general theory of relativity.
Josephson on a 'participatory universe'
http://arxiv.org/pdf/1108.4860v4.pdf
And Max Tegmark argues that the cosmos has virtually zero algorithmic information content, despite the assumption that "an accurate description of the state of the universe appears to require a mind-bogglingly large and perhaps even infinite amount of information, even if we restrict our attention to a small subsystem such as a rabbit."
But, he says that if the Schroedinger equation is universally valid, then "decoherence together with the standard chaotic behavior of certain non-linear systems will make the universe appear extremely complex to any self-aware subsets that happen to inhabit it now, even if it was in a quite simple state shortly after the big bang."
Tegmark's home page
http://space.mit.edu/home/tegmark/home.html
Roger Penrose has long been interested in the "huge gap" in the understanding of physics posed by the Schroedinger's cat scenario. He sees this issue as strongly suggestive of a quantum influence in consciousness -- consciousness being crucial to the collapse of Schroedinger's wave function.
He and Stuart Hameroff, an anesthesiologist, propose that microtubules in the brain are where the relevant quantum activities occur.
Even though Penrose is attempting to expunge the problem of superposed realities from physics with his novel proposal, the point to notice here is that he argues that the quantum enigma is indicative of something beyond current physical knowledge that must be taken into account. The conscious mind, he claims, is not at root doing routine calculations. That chore is handled by the unconscious autonomic systems, he says.
In our terms, he is pointing to the existence of a noumenal world that does not operate in the routine "cause-effect" mode of a calculational model.
The 'Orch OR' model for consciousness
http://www.quantumconsciousness.org/penrose-hameroff/orchOR.html
Penrose talk on quantum activity in consciousness
https://www.youtube.com/watch?v=3WXTX0IUaOg
On the other hand, there has always been a strong belief in non-illusional reality among physicists. We have Einstein and Popper as notable examples. Popper was greatly influenced by Alfred Landé, whose strong opposition to the Copenhagen interpretation is spelled out in books published well before Aspect's experiments had confirmed bi-locality to the satisfaction of most physicists (72).
Yet, the approach of many probability theorists has been to ignore these sorts of implications. Carnap's attitude is typical. In his Logical Foundations of Probability (73), Carnap mentions a discussion by James Jeans of the probability waves of quantum mechanics, which Jeans characterizes as "waves of knowledge," implying "a pronounced step in the direction of mentalism" (74).
But Carnap breezes right past Jeans's point, an omission that I hazard to guess calls into question the logical foundation of Carnap's whole system -- though I note that I have not attempted to plow through the dense forest of mathematical logic symbols in Carnap's book.
I have tried to address some of the issues in need of exploration in my paper Toward, which discusses the reality construction process and its implications. Many worlds of probability is intended as a companion to that paper. Please see
Appendix D. Toward a signal model of perception
https://manyworlds784.blogspot.com/p/contents-overview-philosophy-time-and.html
We have two opposing tendencies: On the one hand, our experiments require that detections occur according to a frequency-style "probability wave," in which the probability of a detection is constrained by the square of the wave amplitude. If numerous trials are done, the law of large numbers will come into effect in, say, the correlation found in an Aspect experiment. So our sense is that quantum probabilities are intrinsic, and that quantum randomness is fundamental. That is, quantum propensities verify an "objective external reality."
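A minimal sketch of that frequency-style constraint, with illustrative amplitudes of my own choosing: the squared amplitudes fix the detection probabilities, and over many trials the observed frequencies settle toward them.

import random

# Sketch: squared amplitudes fix detection probabilities; relative frequencies
# converge toward them over many trials (law of large numbers). Values are assumed.
amp_up, amp_down = 0.6, 0.8   # assumed real amplitudes; 0.36 + 0.64 = 1
p_up = amp_up ** 2

for n in (100, 10000, 1000000):
    hits = sum(random.random() < p_up for _ in range(n))
    print(n, hits / n)        # drifts toward 0.36 as n grows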
On the other hand, the logical implication of such randomness -- as demonstrated in the double-slit and Aspect experiments -- is that what we call reality must be more subjective than usually supposed, that various histories (and hence futures) are superpositions of potential outcomes and do not actualize until observation in the form of cognitive focus (which may be, for all we know, partly unconscious). So one's mental state and train of thought must influence -- after the manner of the science fiction film The Matrix -- one's perceived world. This is the asymmetric three-sink vector field problem. Where will the fixed point (as in center of mass or gravity) be found?
So then assignment of probabilities may seem to make sense, but only if one neglects the influence of one's mind on the perceived outcomes. As long as you stay in your assumed "world," probability calculations may work well enough -- as you inhabit a world that "goes in that direction" (where many people micromanage "the future" in terms of probabilities).
This logical outcome of course is what Popper and many others have objected to. But, despite a great many attempts to counter this line of thought, none have succeeded (as I argue in Toward). At the end of his career, Popper, faced with the Aspect results, was reduced to a statement of faith: even if bi-locality holds, his belief in classical realism would not be shaken.
He points to the horror of Hiroshima and Nagasaki and the "real suffering" of the victims as a moral reason to uphold his objectivism. That is, he was arguing that we must not trivialize their pain by saying our perception of what happened is a consequence of some sort of illusion (75).
Einstein, it seems clear, implicitly believed in a phenomenal world only, though his own progress with relativity required ditching such seemingly necessary phenomena as the ether, which had been thought to mediate not only light waves but gravitational waves. In his more mature period, he conceded that the spacetime continuum devised by himself and Minkowski was in effect an ether. My estimate is that Einstein mildly conceded a noumenal world, but resisted a strong dependence on such a concept. Bohm, who favored a form of "realism," settled on a noumenal world with the analogies of a holographic universe and of the "implicate" order shown by an ink blob that unravels when spun in a viscous fluid and is almost exactly restored to its original state when the spin is reversed. Phenomena are observed because of some noumenal relation.
So we say there is some physical process going on which we can only capture in part. By probability wave, we mean that we may use a wave model to represent what we can know about the unfolding of the process. The probability wave on the one hand implies an objective reality but on the other a reality unfolding in a feedback loop within one's brain-mind.
Waves imply change. But whatever is going on in some noumenal realm is partly predictable in terms of probability of observed properties. That is, the probability wave function is a means of partly predicting event observations but we cannot say it models the noumenal process precisely or at all.
As Jeans put it:
"Heisenberg attacked the enigma of the physical universe by giving up the main enigma -- the nature of the physical universe -- as insoluble, and concentrating on the minor puzzle of co-ordinating our observations of the universe. Thus it is not surprising that the wave picture which finally emerged should prove to be concerned solely with our knowledge of the universe as obtained from our observations (76)."
This pragmatic idea of ignoring the noumenal, or as some might prefer, sub-phenomenal, world has been largely adopted by practicing scientists, who also adopt a background assumption that discussion of interpretation is philosophy and hence outside science. They accept Popper's view that there exists a line dividing science from meta-science and similarly his view that interpretations are not falsifiable. And yet, a counterexample to that belief is the fact that Einstein's interpretation barring bi-locality was falsified, in the minds of most physicists, by the experiments of Aspect.
Go to Chapter 10 HERE.