Consciousness in the Universe: An Updated Review of the “Orch OR” Theory
book chapter in Biophysics of Consciousness: A Foundational Approach
Stuart Hameroff, 2016
The nature of consciousness, the mechanism by which it occurs in the brain, and its ultimate place in the universe are unknown. We proposed in the mid-1990s that consciousness depends on biologically “orchestrated” coherent quantum processes in collections of microtubules within brain neurons, that these quantum processes correlate with, and regulate, neuronal synaptic and membrane activity, and that the continuous Schrödinger evolution of each such process terminates in accordance with the specific Diósi–Penrose (DP) scheme of “objective reduction” (“OR”) of the quantum state. This orchestrated OR activity (“Orch OR”) is taken to result in moments of conscious awareness and/or choice. The DP form of OR is related to the fundamentals of quantum mechanics and space–time geometry, so Orch OR suggests that there is a connection between the brain’s biomolecular processes and the basic structure of the universe. Here we review Orch OR in light of criticisms and developments in quantum biology, neuroscience, physics and cosmology. We also introduce novel suggestions of (1) beat frequencies of faster Orch OR microtubule dynamics (e.g. megahertz) as a possible source of the observed electroencephalographic (“EEG”) correlates of consciousness and (2) that OR played a key role in life’s evolution. We conclude that consciousness plays an intrinsic role in the universe.
1. Introduction: Consciousness in the Universe
Consciousness implies awareness: subjective, phenomenal experience of internal and external worlds. Consciousness also implies a sense of self, feelings, choice, control of voluntary behavior, memory, thought, language, and (e.g., when we close our eyes, or meditate) internally generated images and geometric patterns. But what consciousness actually is remains unknown. Our views of reality, of the universe, of ourselves depend on consciousness. Consciousness defines our existence.
Three general possibilities regarding the origin and place of consciousness in the universe have been commonly expressed.
- Consciousness is not an independent quality but arose, in terms of conventional physical processes, as a natural evolutionary consequence of the biological adaptation of brains and nervous systems. This prevalent scientific view is that consciousness emerged as a property of complex biological computation during the course of evolution. Opinions vary as to when, where and how consciousness appeared, e.g., only recently in humans, or earlier in lower organisms. Consciousness as an evolutionary adaptation is commonly assumed to be epiphenomenal [i.e., a secondary effect without independent influence (Dennett, 1991; Dennett & Kinsbourne, 1991; Wegner, 2002)], and also illusory [largely constructing reality, rather than perceiving it (Chalmers, 2012)]. Nonetheless, consciousness is frequently argued to confer beneficial advantages to species (Dennett, 1995). Overall, in this view, consciousness is not an intrinsic feature of the universe.
- Consciousness is a separate (“spiritual”) quality, distinct from physical actions and not controlled by physical laws, that has always been in the universe. “Descartes’ dualism,” religious viewpoints and other spiritual approaches assume consciousness has been in the universe all along, e.g., as the “ground of being,” “creator” or component of an omnipresent “God” (Chopra, 2001). In this view, consciousness can causally influence physical matter and human behavior, but has no basis or description in science (Nadeau & Kafatos, 2001; Kant, 1998). In another approach, panpsychism attributes consciousness to all matter, but without scientific identity or causal influence. Idealism contends consciousness is all that exists, the material world (and science) being an illusion (Berkeley, 1975). In all these views, consciousness lies outside science.
- Consciousness results from discrete physical events; such events have always existed in the universe as non-cognitive, proto-conscious events, these acting as part of precise physical laws not yet fully understood. Biology evolved a mechanism to orchestrate such events and to couple them to neuronal activity, resulting in meaningful, cognitive, conscious moments and thence also to causal control of behavior. These events are proposed specifically to be moments of quantum state reduction (intrinsic quantum “self-measurement”). Such events need not necessarily be taken as part of current theories of the laws of the universe, but should ultimately be scientifically describable. This is basically the type of view put forward, in very general terms, by the philosopher Whitehead (1929, 1933) and also fleshed out in a scientific framework in the Penrose–Hameroff theory of “orchestrated objective reduction” (“Orch OR”) (Penrose & Hameroff, 1995; Hameroff & Penrose, 1996a, 1996b, 2014; Hameroff, 1998a, 1998b; Penrose & Hameroff, 2011). In the Orch OR theory, these conscious events are terminations of quantum computations in brain microtubules reduced by Diósi–Penrose (DP) “objective reduction” (“OR”), and having experiential qualities. In this view, consciousness is an intrinsic feature of the action of the universe.
In summary, we have:
- Science/Materialism, with consciousness having no distinctive role.
- Dualism/Spirituality, with consciousness (etc.) being outside science.
- Science, with consciousness as an essential ingredient of physical laws not yet fully understood.
2. Consciousness, Computation and Brain Activities
2.1. Unexplained features of consciousness
How does the brain produce consciousness? Most scientists and philosophers view consciousness as an emergent property of complex computation among “integrate-and-fire” brain neurons which interconnect and switch at chemically mediated synapses. However, the mechanism by which such neuronal computation may produce conscious experience remains unknown (Koch, 2004; Chalmers, 1996). Specific unexplained features of consciousness include the following:
The “hard problem”: What is the nature of phenomenal experience, and what distinguishes conscious from non-conscious cognition? Perception and behavior may be accompanied or driven by phenomenal conscious awareness, experiences or subjective feelings, composed of what philosophers call “qualia” (Chalmers, 1996). However, perception and behavior may at other times be unaccompanied by consciousness. We could have evolved as full-time non-conscious “zombies” performing complex “auto-pilot” behaviors without conscious awareness. How and why do we have phenomenal consciousness, an “inner life” of subjective experience?
‘Binding’: Disparate sensory inputs are processed in different brain regions, at slightly different times, and yet are bound together into unified conscious content, a phenomenon known as “binding” (von der Malsburg, 1999). How is conscious content bound together?
Synchrony: Neuronal membrane polarization states may be precisely synchronized over large regions of brain (Fries et al., 2002), and also propagate through brain regions as synchronized zones (Hameroff, 2010). Does precise synchrony require electrical synapses (“gap junctions”) and/or quantum entanglement? Does synchrony reflect discrete, unified conscious moments?
‘Non-computability’ and causal agency: Drawing on Gödel’s theorem, Penrose (1989, 1994) argued that the mental quality of “understanding” cannot be encapsulated by any computational system and must derive from some “non-computable” effect. Moreover, the neurocomputational approach to volition, where algorithmic computation completely determines all thought processes, appears to preclude any possibility for independent causal agency, or free will. Something else is needed. What non-computable factor may occur in the brain?
Cognitive behaviors of single cell organisms: Single-celled organisms like the slime mold Physarum can escape mazes and solve problems, and the protozoan Paramecium can swim, find food and mates, learn, remember and have sex, all without synaptic connections (Nakagaki et al., 2000; Adamatzky, 2012). They are not part of a network. How do single cells manifest intelligent behavior?
2.2. Conscious moments and computation
Consciousness has often been argued to be a sequence of discrete moments. James (1890) described the “specious present, the short duration of which we are immediately and incessantly sensible” (though James was vague about duration, and also described a continual “stream of consciousness”). The “perceptual moment” theory of Stroud (1956) described consciousness as a series of discrete events, like sequential frames of a movie (modern film and video present 24 to 72 frames per second, i.e., 24 to 72 Hertz, “Hz”). Consciousness is also seen as sequences of discrete events in Buddhism, trained meditators describing distinct “flickerings” in their experience of pure undifferentiated awareness (Tart, 1995). Buddhist texts portray consciousness as “momentary collections of mental phenomena,” and as “distinct, unconnected and impermanent moments which perish as soon as they arise.” Buddhist writings even quantify the frequency of conscious moments. For example, the Sarvaastivaadins (Von Rospatt, 1995) described 6,480,000 ‘moments’ in 24 hours (an average of one ‘moment’ per 13.3 ms, 75 Hz), and some Chinese Buddhist writings describe one “thought” per 20 ms (50 Hz). The best measurable correlate of consciousness through modern science is gamma synchrony EEG, 30 to 90 Hz coherent neuronal membrane activity occurring across various synchronized brain regions. Slower periods, e.g., 4 to 7 Hz theta frequency, with nested gamma waves could correspond to saccades and visual gestalts (Woolf & Hameroff, 2001; Van Rullen & Koch, 2003). Thus, we may argue that consciousness consists of discrete events at varying frequencies occurring across brain regions, for example 40 conscious moments per second. What are these conscious moments?
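The rates quoted above are easily cross-checked with simple arithmetic (a sketch only, using the figures given in the text):

```python
# Cross-check of the conscious-moment rates quoted in the text.

seconds_per_day = 24 * 60 * 60          # 86,400 s in 24 hours

# Sarvaastivaadin count: 6,480,000 'moments' per 24 hours
moments_per_day = 6_480_000
rate_hz = moments_per_day / seconds_per_day    # moments per second
period_ms = 1000.0 / rate_hz                   # duration of one moment

print(f"{rate_hz:.0f} Hz, one moment per {period_ms:.1f} ms")  # → 75 Hz, one moment per 13.3 ms

# Chinese Buddhist figure: one 'thought' per 20 ms
print(f"{1000 / 20:.0f} Hz")                                   # → 50 Hz
```

Both figures land within the 30 to 90 Hz gamma synchrony band mentioned as the best measurable EEG correlate of consciousness.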
The overarching presumption in modern science and philosophy is that consciousness emerges from complex synaptic computation among brain neurons acting as fundamental information units. In digital computers, discrete voltage levels represent information units (e.g., “bits”) in silicon logic gates. McCulloch & Pitts (1943) proposed such gates as integrate-and-fire artificial neurons, leading to “perceptrons” (Rosenblatt, 1962) and other types of “artificial neural networks” capable of learning and self-organized behavior. Similarly, according to the standard “Hodgkin–Huxley” model (Hodgkin & Huxley, 1952), biological neurons are “integrate-and-fire” threshold logic devices in which multiple branched dendrites and a cell body (soma) receive and integrate synaptic inputs as membrane potentials (Fig. 1). According to Hodgkin–Huxley, the integrated potential is then compared to a threshold potential at the axon hillock, or axon initiation segment (AIS). When AIS threshold is reached by the integrated potential, an all-or-none action potential “firing,” or “spike” is triggered as output, conveyed along the axon to the next synapse. Cognitive networks of Hodgkin–Huxley neurons connected by variable strength synapses (Hebb, 1949) can self-organize and learn, their axonal firing outputs controlling downstream activity and behavior.
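The integrate-and-fire scheme just described can be illustrated with a minimal leaky integrate-and-fire toy model (a drastic simplification of the Hodgkin–Huxley equations; all parameter values here are illustrative assumptions, not physiological constants):

```python
# Toy leaky integrate-and-fire neuron: synaptic inputs are integrated into
# a membrane potential; when the potential crosses a fixed threshold at the
# axon initiation segment (AIS), an all-or-none spike is emitted as output
# and the potential resets.

def simulate_lif(inputs, threshold=1.0, leak=0.9, v_rest=0.0):
    """Return (spike_times, potential_trace) for a sequence of inputs."""
    v = v_rest
    spikes, trace = [], []
    for t, i_syn in enumerate(inputs):
        v = leak * v + i_syn          # integrate (with a passive leak)
        if v >= threshold:            # compare to AIS threshold
            spikes.append(t)          # all-or-none "firing" output
            v = v_rest                # reset after the spike
        trace.append(v)
    return spikes, trace

# Constant sub-threshold drive: the neuron integrates, fires, resets, repeats.
spikes, trace = simulate_lif([0.3] * 20)
print(spikes)   # → [3, 7, 11, 15, 19]
```

In this classical picture the output is fully determined by the inputs, the leak, and the fixed threshold, which is the determinism the following sections contrast with the spike-to-spike threshold variability observed by Naundorf et al. (2006).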
How does consciousness arise from neurocomputation? Some contend that consciousness emerges from computational complexity due to firings and other brain electrical activity (Scott, 1995; Tononi, 2004). However, neither the specific neuronal activities contributing to complexity, nor any predicted complexity threshold for emergence of consciousness, have been put forth. Nor is there a sense of how complexity per se could give rise to discrete conscious moments. Others contend large scale, cooperative axonal firing outputs, “volleys,” or “explosions” produce consciousness (Koch, 2004; Malach, 2007). But coherent axonal firings are in all cases preceded and caused by synchronized dendritic/somatic integrations. Indeed, gamma synchrony EEG, the best correlate of consciousness, is generated not by axonal firings, but by dendritic and somatic integration potentials. Accordingly, some suggest consciousness primarily involves neuronal dendrites and cell bodies/soma, i.e., in integration phases of “integrate-and-fire” sequences (Pribram, 1991; Eccles, 1992; Hameroff, 2012). Integration implies reduction of uncertainty, merging and consolidating multiple possibilities to one, e.g., selecting conscious perceptions and actions.
2.3. Consciousness and dendritic integration
Neuronal integration is commonly approximated as linear summation of dendritic/somatic membrane potentials (Fig. 2(a)). However, actual integration is not passive, but actively involves complex processing (Shepherd, 1996; Sourdet & Debanne, 1999; Poirazi & Mel, 2001). Dendritic–somatic membranes generate local field potentials (“LFPs”) which give rise to the EEG, including coherent gamma synchrony, the best measurable neural correlate of consciousness (“NCC”) (Gray & Singer, 1989; Crick, 1990). Anesthetic molecules selectively erase consciousness, acting on post-synaptic dendrites and soma, with little or no effect on axonal firing capabilities. Arguably, dendritic/somatic integration is most closely related to consciousness, with axonal firings serving to convey outputs of conscious (or non-conscious) processes to control behavior. But even complex, active integration in Hodgkin–Huxley neurons would, apart from an entirely probabilistic (random) input, be completely algorithmic and deterministic, leaving no apparent place for consciousness.
However, neurons involved in conscious brain processes apparently deviate from Hodgkin–Huxley. Naundorf et al. (2006) showed that the firing threshold at the AIS in cortical neurons in brains of awake animals (compared to neurons in vitro) varies significantly spike-to-spike (Fig. 2(b)). Some factor in addition to inputs, synaptic strengths and the integrated AIS membrane potential apparently contributes to effective integration controlling firing, or not firing, ultimately influencing behavior. This unknown end-integration, pre-firing factor is perfectly positioned for conscious perception and action. What could it involve?
One possible firing-modulating factor comes from lateral connections among neurons via gap junctions, or electrical synapses (Fig. 1). Gap junctions are protein complexes which fuse adjacent neurons and synchronize their membrane polarization states, e.g. in gamma synchrony EEG (Dermietzel, 1998; Draguhn et al., 1998; Galarreta & Hestrin, 2001; Bennett & Zukin, 2004; Fukuda & Kosaka, 2000; Traub et al., 2002). Gap junction-connected cells have fused, synchronized membranes, and also continuous intracellular volumes, as open gap junctions between cells act like doors between adjacent rooms. Neurons connected by dendritic–dendritic gap junctions have synchronized LFPs (giving rise to the EEG) in integration phase, but not necessarily synchronous axonal firing outputs. Gap junction-synchronized dendritic networks can thus collectively integrate inputs, enhancing computational capabilities (Hameroff, 2010). However, membrane-based modulations via gap junction connections would be reflected in the integrated membrane potential, and unable to account for the threshold variability seen by Naundorf et al. (2006). Finer-scale processes from within neurons (and conveyed from interiors of adjacent neurons via open gap junctions) could alter firing threshold without changing membrane potentials, and serve as a potential site and mechanism for consciousness.
Finer-scale intra-cellular processing, e.g., derived from cytoskeletal structures, is the means by which single-cell organisms perform cognitive functions without synaptic inputs. Observing intelligent actions of unicellular creatures, the famed neuroscientist Charles Sherrington said “of nerve there is no trace, but perhaps the cytoskeleton might serve” (Sherrington, 1957). Neurons have a rich and uniquely organized cytoskeleton, the major components being microtubules.
3. A Finer Scale of Neuronal Information Processing
3.1. Microtubule structure

Interiors of eukaryotic cells are organized and shaped by their cytoskeleton, a scaffolding-like protein network of microtubules, microtubule-associated proteins (MAPs), actin and intermediate filaments (Tuszynski et al., 1995). Microtubules (“MTs,” Fig. 3) are cylindrical polymers 25 nanometers (nm = 10⁻⁹ m) in diameter, and of variable length, from a few hundred nanometers apparently up to meters in long nerve axons. MTs self-assemble from peanut-shaped “tubulin” proteins, each tubulin being a dimer composed of alpha and beta monomers, with a dipole giving MTs ferroelectric properties. In MTs, tubulins are usually arranged in 13 longitudinal protofilaments whose lateral connections result in two types of hexagonal lattices (A-lattice and B-lattice) (Amos & Klug, 1974), the protofilaments being shifted in relation to their neighbors, slightly differently in each direction, resulting in differing relationships between each tubulin and its six nearest neighbors. Helical pathways following along neighboring tubulin dimers in the A-lattice repeat every five and eight tubulins, respectively, down any protofilament, and pathways following along neighboring tubulin monomers repeat every three monomers, after winding twice around the MT. Thus helical winding pathways in the MT A-lattice follow the Fibonacci series (3, 5, 8…) found widely in nature.
Along with actin and other cytoskeletal structures, MTs self-assemble to establish cell shape, direct growth and organize functions including those of brain neurons. Various types of MAPs bind at specific lattice sites, and bridge to other MTs, defining cell architecture like girders and beams in a building. Another type of MAP is tau, whose displacement from MTs results in neurofibrillary tangles and the cognitive dysfunction of Alzheimer’s disease (Brunden et al., 2011; Craddock et al., 2012; Rasmussen et al., 1990). Other MAPs include motor proteins (dynein, kinesin) which move rapidly along MTs, transporting cargo molecules to specific synapses and locations. Tau proteins bound to MTs apparently serve as traffic signals, determining where motor proteins deliver their cargo (Dixit et al., 2008). Thus, specific placement of tau on MT lattices appears to reflect encoded information governing synaptic plasticity.
MTs are particularly prevalent in neurons (10⁹ tubulins/neuron), and are uniquely stable. Non-neuronal cells undergo repeated cycles of cell division, or mitosis, for which MTs disassemble and re-assemble as mitotic spindles which separate chromosomes, establish cell polarity and architecture, then depolymerize for tubulins and MTs to be re-utilized for cell function. However neurons, once formed, do not divide, and so neuronal MTs can remain assembled indefinitely. Dendritic–somatic MTs are unique in other ways. MTs in axons (and non-neuronal cells) are arrayed radially, extending continuously (with the same polarity) from the centrosome near the nucleus, outward toward the cell membrane. However MTs in dendrites and cell bodies are interrupted, of mixed polarity (Fig. 1), and arranged in local recursive networks suitable for learning and information processing (Dustin, 1985). Finally, MTs in other cells can assemble at one end and dis-assemble at the other (“treadmilling”), or grow and then abruptly dis-assemble (“dynamic instability,” or “MT catastrophes”) (Guillard et al., 1998). However dendritic–somatic MTs are capped by special MAPs which prevent de-polymerization (Mitchison & Kirschner, 1984), and are thus especially stable and suitable for long-term information encoding and memory (Craddock et al., 2012a).
3.2. Microtubule information processing
After Sherrington’s broad observation in 1957 about the cytoskeleton as a cellular nervous system, Atema (1973) proposed that tubulin conformational changes propagate as signals along microtubules. Hameroff and Watt (1982) suggested that distinct tubulin dipoles and conformational states — mechanical changes in protein shape — could represent information, with MT lattices acting as two-dimensional Boolean switching matrices with input/output computation occurring via MAPs. MT information processing has also been viewed in the context of cellular (“molecular”) automata (“microtubule automata,” Fig. 3) in which tubulin dipole and conformational states interact with neighboring tubulin states in hexagonal MT lattices by dipole couplings, synchronized by biomolecular coherence as proposed by Fröhlich (Fröhlich, 1968, 1970, 1975; Smith et al., 1984; Hameroff, 2006a).
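As a purely illustrative toy, a “microtubule automaton” of the kind just described can be sketched as a binary cellular automaton on a cylindrical lattice of 13 protofilaments, each tubulin updating from its lattice neighbors. The neighborhood and the majority-style update rule below are illustrative assumptions, not the dipole-coupling physics of the actual proposal:

```python
# Toy binary "microtubule automaton": tubulin states live on a cylinder of
# 13 protofilaments x N rows; each cell updates by a majority-style rule
# over four lattice neighbors (along the protofilament and laterally,
# wrapping around the cylinder). Rule and neighborhood are illustrative.

N_PROTOFILAMENTS = 13

def step(lattice):
    rows = len(lattice)
    new = [row[:] for row in lattice]
    for r in range(rows):
        for p in range(N_PROTOFILAMENTS):
            neigh = [
                lattice[(r - 1) % rows][p],                  # along filament
                lattice[(r + 1) % rows][p],
                lattice[r][(p - 1) % N_PROTOFILAMENTS],      # lateral, wraps
                lattice[r][(p + 1) % N_PROTOFILAMENTS],
            ]
            new[r][p] = 1 if sum(neigh) >= 2 else 0          # majority-style
    return new

# Seed a small patch of "1" dipole states and evolve a few steps.
lattice = [[0] * N_PROTOFILAMENTS for _ in range(8)]
for p in range(4, 8):
    lattice[3][p] = lattice[4][p] = 1
for _ in range(3):
    lattice = step(lattice)
print(sum(map(sum, lattice)), "tubulins in state 1")
```

Even this crude sketch shows the key idea: patterns of tubulin states interact and evolve across the lattice, so the polymer as a whole can process information rather than merely serve as scaffolding.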
Protein conformational changes occur at multiple scales (Karplus & McCammon, 1983), e.g., transitions from 10⁻⁶ s to 10⁻¹¹ s. Coordinated movements of the protein’s atomic nuclei, far more massive than electrons, require energy and generate heat. Early versions of Orch OR portrayed tubulin states as alternate mechanical conformations, coupled to, or driven by, London force dipoles in non-polar hydrophobic pockets (Hameroff & Penrose, 1996a, 1996b; Hameroff, 1998a, 1998b; Penrose & Hameroff, 2011). However, recent Orch OR papers do not make use of such large conformational changes, depending instead on tubulin dipole or spin states alone to represent information (Sec. 3.3 below).
Within MTs, each tubulin may differ from its neighbors due to genetic variability, post-translational modifications (Janke & Kneussel, 2010; Hameroff, 2007), phosphorylation states, binding of ligands and MAPs, and moment-to-moment conformational and/or dipole or spin state transitions. Synaptic inputs can register information in dendritic–somatic MTs in brain neurons via metabotropic receptors, MAP2, and CaMKII, a hexagonal holoenzyme able to convey calcium ion influx to MT lattices by phosphorylation (Fig. 4; Craddock et al., 2012a). Thus, tubulins in MTs can each exist in multiple possible states, perhaps dozens or more. However for simplicity, models of MT automata consider only two alternative tubulin states, i.e., binary “bits.”
Another potential factor arises from the specific geometry of MT lattices in which helical winding pathways (in the A-lattice) repeat according to the Fibonacci sequence (3, 5, 8…) and may correlate with conduction pathways (Hameroff, et al., 2002). Dipoles or spin states aligned along such pathways may be favored (and coupled to MT mechanical vibrations) thus influencing MT automata computation.
MT automata based on tubulin dipoles in hexagonal lattices show high capacity integration and learning (Rasmussen et al., 1990). Assuming 10⁹ binary tubulins per neuron switching at 10 megahertz (10⁷ Hz) gives a potential MT-based capacity of 10¹⁶ operations per second per neuron. Conventional neuronal-level approaches based on axonal firings and synaptic transmissions (10¹¹ neurons/brain, 10³ synapses/neuron, 10² transmissions/s/synapse) give the same 10¹⁶ operations per second for the entire brain! MT-based information processing offers a huge potential increase in brain capacity (Hameroff, 2007).
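The order-of-magnitude comparison above is straightforward to verify (using only the figures quoted in the text):

```python
# Order-of-magnitude capacity comparison quoted in the text.

# Microtubule-level estimate, per neuron:
tubulins_per_neuron = 10**9
tubulin_switch_rate = 10**7              # 10 MHz switching
mt_ops_per_neuron = tubulins_per_neuron * tubulin_switch_rate

# Conventional synaptic estimate, for the whole brain:
neurons = 10**11
synapses_per_neuron = 10**3
transmissions_per_synapse = 10**2        # per second
brain_ops = neurons * synapses_per_neuron * transmissions_per_synapse

# Both come to 10^16 operations per second -- one per neuron,
# the other for the entire brain.
print(mt_ops_per_neuron == brain_ops == 10**16)   # → True
```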
How would MT processes be “read out” to influence neuronal and network activities in the brain? First, as previously mentioned, MT processing during dendritic–somatic integration can influence axonal firings to implement behavior. Second, MT processes may directly result in conscious awareness. Third, MT processes can regulate synaptic plasticity, e.g., as tracks and guides for motor proteins (dynein and kinesin) transporting synaptic precursors from cell body to distal synapses. The guidance mechanism in choosing the proper path is unknown, but seems to involve placement of the MAP tau at specific sites on MT lattices. In Alzheimer’s disease, tau is hyperphosphorylated and dislodged from destabilized MTs, forming neurofibrillary tangles which correlate with memory loss (Matsuyama & Jarvik, 1989; Brunden et al., 2011; Craddock et al., 2012). Fourth, tubulin states can encode binding sites not only for tau, but also for structural MAPs determining cytoskeletal scaffolding, and thus directly regulate neuronal structure and synaptic formation. Finally, MT information processing may be directly related to activities at the levels of neurons and neuronal networks through something of the nature of scale-invariant dynamics. Several lines of evidence point to fractal-like (1/f) self-similarity over different spatiotemporal scales in brain dynamics and structure (He et al., 2010; Kitzbichler et al., 2009). Scale-invariance is generally considered at scale levels of neurons and higher-level neuronal networks, but may extend downward in size (and higher in frequency) to intra-neuronal MT dynamics, spanning 4 or 5 scale levels or more, each level separated by several orders of magnitude. MT information processing depends on interactive states of individual tubulin proteins. What are those states, and how are they governed?
3.3. Tubulin dipoles and anesthesia
Tubulin, like other proteins, is composed of a heterogeneous group of amino acid residues connected to a peptide backbone. The residues include both water-soluble polar, and water-insoluble non-polar groups, the latter including “aromatic” amino acids (phenylalanine, tyrosine and tryptophan) with “π” orbital electron resonance clouds in phenyl and indole rings. π orbital clouds are composed of electrons able to delocalize across a spatial region. Like oil separating from water, non-polar electron clouds coalesce during protein folding to form isolated water-excluding “hydrophobic regions” within proteins with particular (“oily,” “lipid-like”) solubility. Driving the folding are non-polar, but highly polarizable orbital electron cloud dipoles which couple by van der Waals London forces (instantaneous dipole-induced dipole attractions between electron clouds) (Voet & Voet, 1995).
Within intra-protein hydrophobic regions, anesthetic gas molecules bind by London force dipole couplings, and thereby (somehow) exert their effects on consciousness (Craddock et al., 2012b, 2015; Hameroff, 2006a; Hameroff, 1998c; Hameroff et al., 1982; Hameroff & Watt, 1983). Historically, views of anesthetic action have focused on neuronal membrane proteins, but actual evidence, e.g., from genomics and proteomics (Xi et al., 2004; Pan et al., 2007), points to anesthetic action in microtubules. In the most definitive anesthetic experiment yet performed, Emerson et al. (2013) used fluorescent anthracene as an anesthetic in tadpoles, and showed that cessation of tadpole behavior occurs specifically via anthracene binding in tadpole brain microtubules. Despite prevailing assumptions, actual evidence supports anesthetic action on microtubules.
Tubulin (Fig. 5) contains 32 aromatic (phenyl and indole) amino acid rings with π electron resonance clouds, most within a Förster resonance transfer distance of 1 to 2 nm (Craddock et al., 2012b). Resonance rings align along grooves which traverse tubulin, and appear to meet those in neighboring tubulins along helical lattice pathways (Fig. 6A). Simulation of anesthetic molecules (Fig. 5, red spheres) shows binding in a hydrophobic channel aligned with the five- and eight-start helical winding pathways in the microtubule A-lattice.
Figure 6B shows collective dipole couplings in contiguous rings. Quantum superposition of both states is shown in gray. Anesthetics (lower right) appear to disperse dipoles necessary for consciousness, resulting in anesthesia (Hameroff, 2006a; Hameroff, 1998c; Hameroff et al., 1982; Hameroff & Watt, 1983). Electron cloud dipoles may be either charge separation (electric) or electron spin (magnetic). Tubulin dipoles in Orch OR were originally described in terms of London-force electric dipoles, involving charge separation. However we now suggest, as an alternative, magnetic dipoles, which could be related to electron spin — and possibly related also to nuclear spins (which can remain isolated from their environments for long periods of time). ‘Spin-flips’ might perhaps relate to alternating currents (ACs) in MTs. Spin is inherently quantum in nature, and quantum spin transfer through aromatic rings is enhanced at warm temperature (Ouyang & Awschalom, 2003). In Figs. 6 and 7, yellow may be considered “spin up,” and blue considered “spin down.”
It should be made clear, however, that the notions of ‘up’ and ‘down’ referred to here need be figurative only. Yet, there are, in fact, directional aspects to the notion of spin; in essence, the spin direction is the direction of the axis of rotation, where conventionally we regard the rotational direction to be right-handed about the direction being referred to, and “up” would refer to some arbitrarily chosen spatial direction and “down” to the opposite direction. If the particle has a magnetic moment (e.g., electron, proton or neutron), its magnetic moment is aligned (or anti-aligned, according to the type of particle) with its spin. Within a microtubule, we might imagine “up” and “down” are chosen to refer to the two opposite directions along the tube’s axis itself, or else some other choice of alignment might be appropriate. However, as indicated earlier, spin is a quintessentially quantum-mechanical quantity, and for a spin-one-half object, like an electron or a nucleon (neutron or proton), all possible directions for the spin rotation axis arise as quantum superpositions of some arbitrarily chosen pair of directions. Indeed the directional features of quantum spin inter-relate with the quantum superposition principle in fundamental ways.
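The statement that every spin direction arises as a superposition of two chosen basis directions can be made explicit in standard quantum-mechanical notation (a textbook identity, not specific to Orch OR):

```latex
% A spin-1/2 state pointing along the spatial direction with polar angles
% (\theta, \phi) is a superposition of the "up" and "down" basis states:
\lvert n(\theta,\phi)\rangle \;=\; \cos\!\tfrac{\theta}{2}\,\lvert\uparrow\rangle
  \;+\; e^{i\phi}\,\sin\!\tfrac{\theta}{2}\,\lvert\downarrow\rangle
```

For example, $\theta = 0$ gives pure “up,” $\theta = \pi$ gives pure “down,” and $\theta = \pi/2$, $\phi = 0$ gives an equal superposition, a spin pointing along a perpendicular axis.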
Here, we may speculate that chains of correlated (“up–up–up,” “down–down–down”) or possibly anti-correlated (“down–up–down,” “up–down–up”) spin states along lattice pathways in microtubules, or perhaps something more subtle, might provide biologically plausible ways of propagating quantum bit pairs (qubits) along the pathways. If such correlated spin chains make physical sense, one might speculate that periodic spin-flip or spin-precession processes (either electric or magnetic) might occur, and could be correlated with ACs in microtubules at specific frequencies.
The group of Anirban Bandyopadhyay at National Institute for Material Sciences in Tsukuba, Japan, has indeed discovered conductive resonances in single microtubules that are observed when there is an applied AC at specific frequencies in gigahertz, megahertz and kilohertz ranges (Sahu et al., 2013a, 2013b; 2014). See Sec. 4.5.
Electron dipole shifts do have some tiny effect on nuclear positions via charge movements and Mössbauer recoil (Sataric et al., 1998; Brizhik et al., 2001). A shift of one nanometer in electron position might move a nearby carbon nucleus a few femtometers (“Fermi lengths,” i.e. 10⁻¹⁵ m), roughly its diameter. The effect of electron spin/magnetic dipoles on nuclear location is less clear. Recent Orch OR publications have cast tubulin bits (and quantum bits, or qubits) as coherent entangled dipole states acting collectively among electron clouds of aromatic amino acid rings, with only femtometer conformational change due to nuclear displacement (Penrose & Hameroff, 2011; Hameroff, 2012). As it turns out, femtometer displacement might be sufficient for Orch OR (Sec. 5.2).
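The scale of the nuclear displacement mentioned above can be checked with a back-of-the-envelope estimate. The calculation below is a crude center-of-mass recoil argument of our own, not the Mössbauer treatment of the cited papers; it lands at tens of femtometers, i.e., the same femtometer scale the text invokes:

```python
# If an electron shifts by ~1 nm, a rigid center-of-mass argument says a
# nearby carbon nucleus recoils by roughly (m_e / m_C) x 1 nm.
# (A crude sketch, not the Mossbauer-recoil analysis cited in the text.)

m_e = 9.109e-31                 # electron mass, kg
m_C = 12 * 1.6605e-27           # carbon-12 nucleus mass, kg

electron_shift_m = 1e-9         # 1 nanometer
recoil_m = (m_e / m_C) * electron_shift_m
recoil_fm = recoil_m / 1e-15    # convert to femtometers

print(f"{recoil_fm:.0f} fm")    # → 46 fm: femtometer-scale displacement
```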
An intra-neuronal, finer scale of MT-based information processing could account for deviation from Hodgkin–Huxley behavior and, one might hope, enhanced computational capabilities. However, like neuronal models, approaches based on MT information processing with classical physics, e.g., those developed by Hameroff and colleagues up through the 1980s, faced a reductionist dead-end in dealing with consciousness. Enhanced computation per se fails to address certain aspects of consciousness (Sec. 4.1). Something was missing. Was it some subtle feature of quantum mechanics?
4. Quantum Physics and Consciousness
4.1. Non-computability and OR
In 1989 Penrose published The Emperor’s New Mind (Penrose, 1989), which was followed in 1994 by Shadows of the Mind (Penrose, 1994). Critical of the viewpoint of “strong artificial intelligence” (“strong AI”), according to which all mental processes are entirely computational, both books argued, by appealing to Gödel’s theorem and other considerations, that certain aspects of human consciousness, such as understanding, must be beyond the scope of any computational system, i.e., “non-computable.” Non-computability is a perfectly well-defined mathematical concept, but it had not previously been considered as a serious possibility for the result of physical actions. The non-computable ingredient required for human consciousness and understanding, Penrose suggested, would have to lie in an area where our current physical theories are fundamentally incomplete, though of important relevance to the scales that are pertinent to the operation of our brains. The only serious possibility was the incompleteness of quantum theory — an incompleteness that both Einstein and Schrödinger (and also Dirac) had recognized, despite quantum theory having frequently been argued to represent the pinnacle of 20th century scientific achievement. This incompleteness is the unresolved issue referred to as the “measurement problem,” which we consider in more detail below, in Sec. 4.3. One way to resolve it would be to provide an extension of the standard framework of quantum mechanics by introducing an objective form of quantum state reduction — termed “OR” (objective reduction), an idea described in further detail in Penrose (1992, 1996, 2000, 2009).
In Penrose (1989), the tentatively suggested OR proposal would have its onset determined by a condition referred to as “the one-graviton” criterion. However, in Penrose (1999, 2009), a much better-founded criterion was used, now frequently referred to as the Diósi–Penrose proposal (henceforth “DP;” see Diósi’s earlier work, which was a similar gravitational scheme, though not motivated via specific general relativistic principles). The DP proposal gives an objective physical threshold, providing a plausible lifetime for quantum-superposed states. Other gravitational OR proposals have been put forward from time to time (Karolyhazy, 1966; Karolyhazy et al., 1986; Percival, 1995; Ghirardi et al., 1990; Kibble, 1981; Pearle, 1989; Pearle & Squires, 1994) as solutions to the measurement problem, suggesting modifications of standard quantum mechanics, but all of these differ from DP in important respects. Among these, only the DP proposal (in its role within Orch OR) has been suggested as having anything to do with the consciousness issue. The DP proposal is sometimes referred to as a “quantum gravity” scheme, but it is not part of the normal ideas used in quantum gravity, as will be explained below (Sec. 4.4). Moreover, the proposed connection between consciousness and quantum measurement is almost opposite, in the Orch OR scheme, to the kind of idea that had frequently been put forward in the early days of quantum mechanics (see, for example, Wigner (1976)), which suggests that a “quantum measurement” is something that occurs only as a result of the conscious intervention of an observer. Rather, the DP proposal suggests that each OR event, which is a purely physical process, is itself a primitive kind of “observation,” a moment of “proto-conscious experience.” This issue, also, will be discussed below.
4.2. The nature of quantum mechanics
The term “quantum” refers to a discrete element of energy in a system, such as the energy E of a particle, or of some other subsystem, this energy being related to a fundamental frequency ν of its oscillation, according to Max Planck’s famous formula (where h is Planck’s constant):

E = hν
This deep relation between discrete energy levels and frequencies of oscillation underlies the wave/particle duality inherent in quantum phenomena. Neither the word “particle” nor the word “wave” adequately conveys the true nature of a basic quantum entity, but both provide useful partial pictures.
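As a quick numerical illustration of the Planck relation E = hν (the frequency below, that of a green visible-light photon, is our own illustrative choice, not a value from the text):

```python
# Numerical illustration of the Planck relation E = h * nu.
h = 6.62607015e-34   # Planck's constant in J*s (exact SI value)
nu = 5.6e14          # oscillation frequency in Hz (illustrative: green light)

E = h * nu           # energy of one quantum at this frequency
print(f"E = {E:.3e} J")   # about 3.7e-19 J, i.e. roughly 2.3 eV
```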
The laws governing these submicroscopic quantum entities differ from those governing our everyday classical world. For example, quantum particles can exist in two or more states or locations simultaneously, where such a multiple coexisting superposition of alternatives (each alternative being weighted by a complex number) would be described mathematically by a quantum wavefunction. The measurement problem (referred to above) is, in effect, the question of why we do not see such superpositions in the consciously perceived macroscopic world; we see objects and particles as material, classical things in specific locations and states.
Another quantum property is “non-local entanglement,” in which separated components of a system become unified, the entire collection of components being governed by one common quantum wavefunction. The parts remain somehow connected, even when spatially separated by very significant distances (the present experimental record being 143 km; Xiao et al., 2012). Quantum superpositions of bit states (quantum bits, or qubits) can be interconnected with one another through entanglement in quantum computers. However, quantum entanglements cannot, by themselves, be used to send a message from one part of an entangled system to another; yet entanglement can be used in conjunction with classical signaling to achieve strange effects — such as the phenomenon referred to as quantum teleportation — that classical signaling cannot achieve by itself (Bennett & Wiesner, 1992; Bouwmeester et al., 1997; Marcikic et al., 2002).
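The perfect correlations of an entangled pair can be illustrated with elementary arithmetic on a two-qubit wavefunction. A minimal sketch (plain Python, no quantum library) of the Bell state (|00⟩ + |11⟩)/√2, showing that the Born-rule probabilities fall entirely on the agreeing outcomes 00 and 11:

```python
from math import sqrt, isclose

# Amplitudes of the Bell state (|00> + |11>)/sqrt(2) over the basis
# |00>, |01>, |10>, |11>, as a plain list of real numbers.
bell = [1 / sqrt(2), 0.0, 0.0, 1 / sqrt(2)]

# Born rule: outcome probability = |amplitude|^2. The mixed outcomes
# 01 and 10 have probability zero, however far apart the qubits are.
probs = [a ** 2 for a in bell]
assert isclose(sum(probs), 1.0)   # the state is normalized

for label, p in zip(["00", "01", "10", "11"], probs):
    print(label, round(p, 2))
```

Each qubit on its own looks like a fair coin; only the joint outcomes reveal the correlation, which is why entanglement alone cannot carry a message.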
4.3. The measurement problem and OR
The issue of why we do not directly perceive quantum superpositions is a manifestation of the measurement problem mentioned above. Put more precisely, the measurement problem is the conflict between the two fundamental procedures of quantum mechanics. One of these procedures, referred to as unitary evolution, denoted here by U, is the continuous deterministic evolution of the quantum state (i.e., of the wavefunction of the entire system) according to the fundamental Schrödinger equation. The other is the procedure that is adopted whenever a measurement of the system — or observation — is deemed to have taken place, where the quantum state is discontinuously and probabilistically replaced by another quantum state (referred to, technically, as an eigenstate of a mathematical operator that is taken to describe the measurement). This discontinuous jumping of the state is referred to as the reduction of the state (or the “collapse of the wavefunction”), and will be denoted here by the letter R. This conflict between U and R is what is encapsulated by the term “measurement problem” (but perhaps more accurately it may be referred to as “the measurement paradox”) and its problematic nature is made manifest when we consider the measuring apparatus itself as a quantum entity, which is part of the entire quantum system consisting of the original system under observation together with this measuring apparatus. The apparatus is, after all, constructed out of the same type of quantum ingredients (electrons, photons, protons, neutrons, etc. — or quarks, gluons, etc.) as is the system under observation, so it ought to be subject also to the same quantum laws, these being described in terms of the continuous and deterministic U. How, then, can the discontinuous and probabilistic R come about as a result of the interaction (measurement) between two parts of the quantum system? This is the paradox faced by the measurement problem.
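The contrast between the two procedures can be made concrete for a single qubit: U is a deterministic, norm-preserving matrix action, while R is a discontinuous, probabilistic jump to one of the eigenstates. A schematic sketch (the Hadamard gate is our own illustrative choice of unitary):

```python
import random
from math import sqrt

# Schematic single-qubit contrast between the two procedures:
# U -- continuous, deterministic, norm-preserving unitary evolution;
# R -- discontinuous, probabilistic reduction to an eigenstate.

def apply(matrix, state):
    """Apply a 2x2 real matrix to a 2-component state vector."""
    return [sum(matrix[i][j] * state[j] for j in range(2)) for i in range(2)]

H = [[1 / sqrt(2),  1 / sqrt(2)],
     [1 / sqrt(2), -1 / sqrt(2)]]   # Hadamard gate: one possible U step

state = [1.0, 0.0]                  # start in |0>
state = apply(H, state)             # U: yields (|0> + |1>)/sqrt(2)
assert abs(sum(a * a for a in state) - 1.0) < 1e-12   # U preserves the norm

def reduce_state(state):
    """R: jump to |0> or |1> with Born-rule probabilities."""
    p0 = state[0] ** 2
    return [1.0, 0.0] if random.random() < p0 else [0.0, 1.0]

print(reduce_state(state))          # either [1.0, 0.0] or [0.0, 1.0]
```

The paradox in code form: `apply` is just more matrix arithmetic, so nothing in U ever produces the jump that `reduce_state` performs; R has to be put in by hand.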
There are many ways that quantum physicists have attempted to come to terms with this conflict (Bell, 1966; Bohm, 1983; Rae, 2002; Polkinghorne, 2002; Penrose, 2004). In the early 20th century, the Danish physicist Niels Bohr, together with Werner Heisenberg, proposed the pragmatic “Copenhagen interpretation,” according to which the wavefunction of a quantum system, evolving according to U, is not assigned any actual physical “reality,” but is taken as basically providing the needed “book-keeping” so that eventually probability values can be assigned to the various possible outcomes of a quantum measurement. The measuring device itself is explicitly taken to behave classically and no account is taken of the fact that the device is ultimately built from quantum-level constituents. The probabilities are calculated, once the nature of the measuring device is known, from the state that the wavefunction has U-evolved to at the time of the measurement. The discontinuous “jump” that the wavefunction makes upon measurement, according to R, is attributed to the change in “knowledge” that the result of the measurement has on the observer. Since the wavefunction is not assigned physical reality, but is considered to refer merely to the observer’s knowledge of the quantum system, the jumping is considered simply to reflect the jump in the observer’s knowledge state, rather than in the quantum system under consideration.
Many physicists remain unhappy with such a point of view, however, and regard it largely as a “stop-gap,” in order that progress can be made in applying the quantum formalism, without this progress being held up by a lack of a serious quantum ontology, which might provide a more complete picture of what is actually going on. One may ask, in particular, what it is about a measuring device that allows one to ignore the fact that it is itself made from quantum constituents and is permitted to be treated entirely classically. A good many proponents of the Copenhagen standpoint would take the view that while the physical measuring apparatus ought actually to be treated as a quantum system, and therefore part of an over-riding wavefunction evolving according to U, it would be the conscious observer, examining the readings on that device, who actually reduces the state, according to R, thereby assigning a physical reality to the particular observed alternative resulting from the measurement. Accordingly, before the intervention of the observer’s consciousness, the various alternatives of the result of the measurement including the different states of the measuring apparatus would, in effect, still have to be treated as coexisting in superposition, in accordance with what would be the usual evolution according to U. In this way, the Copenhagen viewpoint puts consciousness outside science, and does not seriously address the ontological nature or physical role of superposition itself nor the question of how large quantum superpositions like Schrödinger’s superposed live and dead cat (see below) might actually become one thing or another.
A more extreme variant of this approach is the “multiple worlds hypothesis” of Everett (1957), in which each possibility in a superposition evolves to form its own universe, resulting in an infinite multitude of coexisting “parallel” worlds. The stream of consciousness of the observer is supposed somehow to “split,” so that there is one in each of the worlds — at least in those worlds for which the observer remains alive and conscious. Each instance of the observer’s consciousness experiences a separate independent world, and is not directly aware of any of the other worlds.
A more “down-to-earth” viewpoint is that of environmental decoherence, in which interaction of a superposition with its environment “erodes” quantum states, so that instead of a single wavefunction being used to describe the state, a more complicated entity is used, referred to as a density matrix. However, decoherence does not provide a consistent ontology for the reality of the world, in relation to the density matrix (see, for example, Penrose (1994), Secs. 29.3–29.6), and provides merely a pragmatic procedure. Moreover, it does not address the issue of how R might arise in isolated systems, nor the nature of isolation, in which an external “environment” would not be involved, nor does it tell us which part of a system is to be regarded as the “environment” part, and it provides no limit to the size of that part which can remain subject to quantum superposition.
Still other approaches include various types of OR in which a specific objective threshold is proposed to cause quantum state reduction (Percival, 1994; Moroz et al., 1998; Ghirardi et al., 1986). The specific OR scheme that is used in Orch OR will be described below.
The quantum pioneer Erwin Schrödinger took pains to point out the difficulties that confront the U-evolution of a quantum system with his still-famous thought experiment called “Schrödinger’s cat” (Schrödinger, 1935). Here, the fate of a cat in a box is determined by magnifying a quantum event (say the decay of a radioactive atom, within a specific time period that would provide a 50% probability of decay) to a macroscopic action which would kill the cat, so that according to Schrödinger’s own U-evolution the cat would be in a quantum superposition of being both dead and alive at the same time. According to this perspective on the Copenhagen interpretation, if this U-evolution is maintained until the box is opened and the cat observed, then it would have to be the conscious human observing the cat that results in the cat becoming either dead or alive (unless, of course, the cat’s own consciousness could be considered to have already served this purpose). Schrödinger intended to illustrate the absurdity of the direct applicability of the rules of quantum mechanics (including his own U-evolution) when applied at the level of a cat. Like Einstein, he regarded quantum mechanics as an incomplete theory, and his “cat” provided an excellent example for emphasizing this incompleteness. There is a need for something to be done about quantum mechanics, irrespective of the issue of its relevance to consciousness.
4.4. OR and quantum gravity
DP objective reduction is a particular proposal for an extension of current quantum mechanics, taking the bridge between quantum- and classical-level physics as a “quantum-gravitational” phenomenon. This is in contrast with the various conventional viewpoints, whereby this bridge is claimed to result, somehow, from “environmental decoherence,” or from “observation by a conscious observer,” or from a “choice between alternative worlds,” or some other interpretation of how the classical world of one actual alternative may be taken to arise out of fundamentally quantum-superposed ingredients.
The DP version of OR involves a different interpretation of the term “quantum-gravity” from what is usual. Current ideas of quantum-gravity (see, for example, Smolin (2002)) normally refer, instead, to some sort of physical scheme that is to be formulated within the bounds of standard quantum field theory — although no particular such theory, among the multitude that has so far been put forward, has gained anything approaching universal acceptance, nor has any of them found a fully consistent, satisfactory formulation. “OR” here refers to the alternative viewpoint that standard quantum (field) theory is not the final answer, and that the reduction R of the quantum state (“collapse of the wavefunction”) that is adopted in standard quantum mechanics is an actual physical process which is not part of the conventional unitary formalism U of quantum theory (or quantum field theory). In the DP version of OR, the reduction R of the quantum state does not arise as some kind of convenience or effective consequence of environmental decoherence, etc., as the conventional U formalism would seem to demand, but is instead taken to be one of the consequences of melding together the principles of Einstein’s general relativity with those of the conventional unitary quantum formalism U, and this demands a departure from the strict rules of U. According to this OR viewpoint, any quantum measurement — whereby the quantum-superposed alternatives produced in accordance with the U formalism become reduced to a single actual occurrence — is a real objective physical process, and it is taken to result from the mass displacement between the alternatives being sufficient, in gravitational terms, for the superposition to become unstable.
In the DP scheme for OR, the superposition reduces to one of the alternatives in a timescale τ that can be estimated (for a superposition of two states, each of which is taken to be stationary on its own) according to the formula τ ≈ ℏ/EG. An important point about τ, however, is that it represents merely a kind of average time for the state reduction to take place, much like a half-life in radioactive decay: the actual time of decay in each individual state-reduction event, according to DP (in its current form), is a random process, following τ ≈ ℏ/EG only on the average. Such an event would involve the entire (normally entangled) state, stretching across all the superposed material that is involved in the calculation of EG, and the reduction occurs simultaneously (in effect) over the entire superposed state. Here ℏ (= h/2π) is Dirac’s form of Planck’s constant h, and EG is the gravitational self-energy of the difference between the two (stationary) mass distributions of the superposition. (For a superposition for which each mass distribution is a rigid translation of the other, EG is the energy it would cost to displace one component of the superposition in the gravitational field of the other, in moving it from coincidence to the quantum-displaced location (Penrose, 2002).)
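To see how the formula is used numerically: with a purely hypothetical gravitational self-energy EG of roughly 4 × 10⁻³³ J (our illustrative number, not a figure from the text), the relation τ ≈ ℏ/EG gives an average reduction time of a few tens of milliseconds:

```python
# Order-of-magnitude use of the DP relation tau ≈ hbar / E_G.
# E_G below is a hypothetical value chosen by us for illustration;
# it is not a number taken from the text.
hbar = 1.054571817e-34   # reduced Planck constant (h / 2*pi), in J*s
E_G = 4.2e-33            # illustrative gravitational self-energy, in J

tau = hbar / E_G         # average lifetime of the superposition, in s
print(f"tau = {tau * 1e3:.0f} ms")   # -> tau = 25 ms
```

Note the inverse relation: the larger the mass displacement (hence EG), the shorter the superposition survives, which is why macroscopic superpositions like Schrödinger’s cat would reduce essentially instantly on this scheme.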
It is helpful to have a conceptual picture of quantum superposition in a gravitational context. According to modern accepted physical theories, reality is rooted in three-dimensional space and a one-dimensional time, combined together into a four-dimensional space–time. This space–time is slightly curved, in accordance with Einstein’s general theory of relativity, in a way which encodes the gravitational fields of all distributions of mass density. Each different choice of mass density effects a space–time curvature in a different, albeit very tiny, way. This is the standard picture according to classical physics. On the other hand, when quantum systems have been considered by physicists, this mass-induced tiny curvature in the structure of space–time has been almost invariably ignored, gravitational effects having been assumed to be totally insignificant for normal problems in which quantum theory is important. Surprising as it may seem, however, such tiny differences in space–time structure can have large effects, for they entail subtle but fundamental influences on the very rules of quantum mechanics (Penrose, 1992, 1996, 2000, 2009).
In the current context, superposed quantum states for which the respective mass distributions differ significantly from one another will have space–time geometries which also correspondingly differ. For illustration, in Fig. 8, we consider a two-dimensional space–time sheet (one space and one time dimension). In Fig. 8, at left, the top and bottom alternative curvatures indicate a mass in two distinct locations. If that mass were in superposition of both locations, we might expect to see both curvatures, i.e. the bifurcating space–time depicted in the right of Fig. 8, this being the union (“glued together version”) of the two alternative space–time histories that are depicted on the left. The initial part of each space–time is at the upper left of each individual space–time diagram, and so the bifurcating space–time diagram on the right, moving downward and rightward, illustrates two alternative mass distributions evolving in time, their space–time curvature separation increasing.