Like most neuroscientists, I’ve often thought about consciousness. I’ve worried about free will. And then I’ve gotten goosebumps and given up when I realized that I was consciously, willfully thinking about how consciousness and free will are illusions. Michael Graziano of Princeton University, however, has doubled down and tried to formulate a coherent theory of consciousness. He calls it “Attention Schema Theory.” While it’s far from the only theory of consciousness out there, it’s intriguing enough to me to be worth further consideration here.
Before I describe Attention Schema Theory, let’s do a little preliminary thinking about thinking. Very little actual data exists that tells us much about the nature of consciousness – it is a hard problem (or, as philosopher David Chalmers put it, the hard problem) – but we do have a few things to work with.
Everyone feels that he or she is conscious.
When I write about consciousness, every single reader knows intuitively what I mean. To be reading and understanding this blog post, you’ve got to be conscious. Many philosophers, though, argue that we can only know for sure about our own consciousness. You could all be “zombies,” automatons cleverly programmed to respond to my statements in an apparently conscious way.
We are predisposed to assume that others are conscious.
Despite the existence of the zombie theory, in everyday life most people assume other people are conscious. In fact, we go much further. We often ascribe consciousness to animals (plausible in the case of animals with a reasonably complicated nervous system) as well as to teddy bears, cartoon characters, and things with googly eyes stuck to them (even though we know, intellectually, that’s implausible). We even sometimes ascribe consciousness to computers – yelling at them when they break, pleading with them to do what we want, and being tricked into thinking they are human in (admittedly constrained) Turing tests. What is it about our own consciousness that so inclines us to presume consciousness in others? Why do we tend to equate eyes and facial expressions with real emotions?
Consciousness is inaccurate.
Many discussions of consciousness focus on its definition as a state of awareness. Awareness, though, can be tricky. If I see a cat and think “a cat!” then I’m having a conscious experience of a cat, for sure. But if I see a crumpled rag and think, just for a moment, “a cat!” did I just have a conscious experience of a cat? Basically, yes. Our consciousness is easily duped by illusions, which reveal that it involves assumptions made by our brain that can be independent of sensory experience. The delusions suffered by some psychiatric patients offer a stark example. In the article that inspired this post, Michael Graziano describes a patient who knew he had a squirrel in his head, even though he was aware the belief was illogical and reported no sensory experience of the squirrel. Another example of the inaccuracy of consciousness is one we can all experience: the “phi phenomenon.” Two dots flash on a screen sequentially. With the right timing, it appears that there is only one dot, which moves from one place to the other. In other words, although the dot did not, in reality, slide smoothly across the screen, our consciousness inaccurately perceives motion. Daniel Dennett uses the phi example in his exposition of his “Multiple Drafts” model of consciousness.
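If you want to experience the phi phenomenon yourself, the sketch below is a minimal, illustrative demo (not a calibrated psychophysics stimulus): two dots alternate on a black canvas using Python’s built-in tkinter, and the timing values are only rough guesses at where apparent motion tends to kick in, so feel free to tweak them.

```python
# Minimal phi-phenomenon sketch: two dots flashed alternately.
# The durations below are illustrative guesses, not calibrated values;
# adjust FLASH_MS and GAP_MS until the flashes start to look like motion.
import tkinter as tk

FLASH_MS = 100   # how long each dot stays on screen (ms)
GAP_MS = 60      # blank interval before the other dot appears (ms)

root = tk.Tk()
canvas = tk.Canvas(root, width=400, height=200, bg="black")
canvas.pack()

positions = [(100, 100), (300, 100)]   # centres of the left and right dots
state = {"side": 0, "dot": None}

def show_dot():
    x, y = positions[state["side"]]
    state["dot"] = canvas.create_oval(x - 15, y - 15, x + 15, y + 15,
                                      fill="white", outline="")
    root.after(FLASH_MS, hide_dot)

def hide_dot():
    canvas.delete(state["dot"])
    state["side"] = 1 - state["side"]  # switch to the other position
    root.after(GAP_MS, show_dot)

show_dot()
root.mainloop()
```

With the flash long enough and the gap short enough, the two dots stop looking like separate flashes and start looking like a single dot jumping back and forth – motion your visual system constructs, not motion that is actually on the screen.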
Consciousness can be manipulated.
Self-awareness is not always what it seems. Humans are programmed to search for patterns and meaning, and we are naturally inclined to attribute causation to correlated events even when no such relationship exists. We are suggestible. We can become even more suggestible and less autonomous when hypnotized. In numerous psychology studies, researchers have described various ways of reliably manipulating participants’ choices (for example, using subtle peer pressure). Most of the time, the participants are not even aware of the manipulation and insist they are acting of their own free will.

In addition to being a state of awareness, consciousness is conceived of as a feeling of selfhood, a sense of individuality that separates you from the rest of the world and allows you to find meaning in the words “me” and “you.” However, this feeling of selfhood can also be manipulated. Expert meditators such as Buddhist monks have trained themselves to erase this feeling of selfhood in order to experience a feeling of “oneness” while meditating. Brain scans of meditating monks don’t provide a lot of detail on the mechanisms underlying “oneness,” but they do suggest that the monks have learned to significantly alter their brain activity while meditating. A feeling of oneness can also be thrust upon you: Jill Bolte Taylor, the neuroscientist author of “My Stroke of Insight,” describes a feeling of oneness and loss of physical boundaries as her massive stroke progressed. Hallucinogenic drugs such as LSD can also provoke feelings of oneness. Out-of-body experiences fall into the same category: they can often be induced by meditation, drugs, near-death experiences, or direct brain stimulation of the temporoparietal junction. Damage to the temporoparietal junction on one side of the brain results in “hemispatial neglect,” in which a person essentially ignores the opposite side of their body and may even deny that this side of the body is a part of their self.
-----------------------
Now, let’s get back to Attention Schema Theory. What is this theory, and how does it help fit some of our observations about consciousness together? Is it a testable theory? Can it help drive consciousness research forward?

At the heart of Attention Schema Theory is an evolutionary hypothesis. It assumes that consciousness is a real thing, physically represented in the brain, which evolved under selective pressure. The first nervous systems were probably extremely simple, something like the “nerve net” of today’s jellyfish. Their job was to transduce an external stimulus into a signal within the organism that could be used to effect an adaptive action. The more information an organism could extract from its environment, the greater an advantage it had in surviving and reproducing in that environment, so increasingly sophisticated sensory modalities developed. But there’s lots of information in the world – lights, sounds, smells, etc. coming at you from every angle all the time. It can be overwhelming and distracting. How do you know which bits of information are actually important to your survival and which can be ignored? The theory is that some kind of top-down control network formed that enhanced the most salient signals (think: a sudden crashing sound, the smell of food, anything that you previously learned means a predator is nearby). From this control network came attention.

Attention allows you to focus on what’s important, but how do you know what’s important? Slowly, attention increased in sophistication. It went from, for example, always assuming the smell of food is attention-worthy to being able to decide whether it’s attention-worthy by modeling your own internal state of hunger. If you’re not hungry, it’s not worth paying attention to food cues; finding shelter or a mate might be more important. According to Graziano, this internal model of attention is what constitutes self-awareness. Consciousness evolved so that you can relate information about yourself to the world around you in order to make intelligent decisions. But since consciousness is just a shorthand summary of an extremely complex array of signals, a little pocket-reference version of the self, it involves simplifications and assumptions that make it slightly inaccurate.
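To make that last idea a bit more concrete, here is a toy sketch of my own (not anything from Graziano’s article, and every name and number in it is made up) showing how an internal model of the organism’s state might gate which signal wins the competition for attention:

```python
# Toy illustration of state-gated attention (my own sketch, not Graziano's model).
# Each stimulus has a bottom-up salience; an internal model of hunger scales
# the weight given to food-related cues before the "winner" is selected.

def attend(stimuli, hunger):
    """Pick the stimulus with the highest state-weighted salience.

    stimuli: list of (label, salience, is_food_cue) tuples
    hunger:  0.0 (sated) to 1.0 (starving)
    """
    def weighted_salience(stim):
        label, salience, is_food_cue = stim
        # Food cues are up- or down-weighted by the internal hunger model;
        # everything else competes on raw salience alone.
        return salience * (0.2 + hunger) if is_food_cue else salience
    return max(stimuli, key=weighted_salience)

stimuli = [
    ("smell of food", 0.8, True),
    ("rustle in the grass", 0.6, False),
    ("distant birdsong", 0.2, False),
]

print(attend(stimuli, hunger=0.9))  # hungry: the food smell wins
print(attend(stimuli, hunger=0.0))  # sated: the rustle wins instead
```

The sketch only captures the gating step. Graziano’s further claim is that the brain also builds a simplified model of this attentional process itself, and that model – the attention schema – is what we experience and report as awareness.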
Consciousness isn’t quantal. Basic self-awareness is only the beginning. What about being able to visualize alternative realities? What about logical reasoning abilities? What about self-reflection and self-doubt? Graziano does not address every aspect of consciousness or how each might have evolved, but he does go on to talk a bit about how consciousness informs complex social behaviors. If you’re living in a society, it helps to be able to model what other people are thinking and feeling in order to interact with them productively. To do this, you have to understand consciousness in an abstract way. You have to understand that your consciousness is only your perspective, not an objective account of reality, and that brings an additional level of insight and self-reflection into the equation. It’s worth noting that there is a specific disorder in which this aspect of consciousness is impaired: autism.
-----------------------
Most of the appeal of Attention Schema Theory, to me, lies in its placement of consciousness as a fully integrated function of the brain. It doesn’t suppose an epiphenomenal aura that happens to be layered on top of normal brain function while serving no real purpose. Instead, it says that consciousness is used in decision-making. It presents an evolutionary account of why we might be conscious and also of why we tend to attribute consciousness (especially emotions) to others. It explains, somewhat, why consciousness is inaccurate and malleable: it’s not built to represent everything about the real world faithfully; it’s just meant to be a handy reference schematic.
Attention Schema Theory isn’t entirely satisfying, though. It’s the outline of an interesting line of reasoning but not a complete thought. No actual brain mechanisms or areas are identified or even hypothesized. How is consciousness computed in the brain? I agree with Daniel Dennett that there’s no “Cartesian theater,” but there must be some identifiable principle of human brain circuit organization that allows consciousness. To move any theory of consciousness forward scientifically, we need a concrete hypothesis. And we don’t just need a hypothesis: we need a testable one. Without a way of experimentally measuring consciousness, the scientific method cannot be applied. Currently, our concept of consciousness stems only from our own self-reporting, and, as mentioned above, the only consciousness you can truly be sure of is your own.
Given the suppositions of Attention Schema Theory, though, there may be some proxies of consciousness we can study that would help us flesh out our understanding and piece together reasonable hypotheses. First, attention. Attention is by no means consciousness (I can tune a radio to a certain frequency but that doesn’t mean it’s conscious), but if consciousness evolved from attention then they should share some common mechanisms. Many neuroscientists already study attention, but they may not have considered their research findings in light of Attention Schema Theory. Perhaps there are already some principles of how brain circuits support selective attention that could be adapted and incorporated into Graziano’s schema. If consciousness really evolved from attention, then there should exist some “missing links,” organisms that display (or displayed) transitional states of consciousness somewhere between rudimentary top-down mechanisms for directing attention and the capacity for existential crises. Can we describe these links?
Second, theory of mind. Theory of mind is our ability to understand that other minds exist and may have perspectives different from our own. Having theory of mind should require a sophisticated version of consciousness, but the absence of theory of mind does not imply a lack of consciousness: you don’t need to be aware of others’ minds to be aware of your own. Most children with autism fail tests of theory of mind, but they are still clearly conscious beings. Still, theory of mind and consciousness should be related if Graziano is right, and we know a few things about theory of mind.

Functional imaging studies point towards the importance of the anterior paracingulate cortex, as well as a few other brain areas, in understanding the mental states of others. “Mirror neurons,” neurons that respond both when you perform an action and when you watch someone else perform that same action, have been discovered in the premotor cortex of monkeys, and some have argued that monkeys and chimpanzees have theory of mind. If they do, then we’d at least have a potential animal model in which to pursue further neurophysiological research (though the ethics of such research could be thorny). There is very little evidence to support theory of mind in lower mammals such as rats, but even so, comparative anatomy studies of the theory of mind-related brain areas identified by functional imaging could be informative. We already know of one interesting, mostly hominid-specific class of neurons that exists in suggestive cortical areas (such as the anterior cingulate cortex, dorsolateral prefrontal cortex, and frontoinsular cortex): spindle neurons, also known as von Economo neurons (which, it turns out, can also be found in whales and elephants!). Unfortunately, we don’t yet know what these neurons do. Further studies of von Economo neurons could tell us about the mechanisms underlying theory of mind and, by extension, consciousness. Maybe.
I’ll be curious to see where Graziano goes with his Attention Schema Theory. It is, at the very least, a bold attempt at answering a question that has vexed humanity through the ages. I wonder, though, whether the question can ever be answered. Perhaps you are now inspired to go out and do some awesome research. I, for one, am getting goosebumps again, so I think I’ll take a break.