NeuroTalk S2E2: Diana Bautista


Each week the Stanford Neurosciences Institute (SNI) invites a prominent scientist to campus to share their most recent work with the Stanford community. And each week, as part of the NeuWrite West podcast NeuroTalk, we engage the SNI speaker in an informal conversation, aiming to gain better insight into the speaker's personality and to provide a platform for the kinds of stories that interest us but are often left out of more formal papers and presentations. This week, we talk to Diana Bautista about the difference between itch and pain, the curious organ of the star-nosed mole, and more! Dr. Bautista is an assistant professor of molecular and cellular biology at the University of California, Berkeley.



Other listening options: Our conversation with Professor Bautista can be streamed or downloaded here: NeuroTalk S2E2 Diana Bautista. You can also subscribe to NeuroTalk through iTunes by searching for "Neuwritewest" in the iTunes store and subscribing to our channel.

Please let us know if you have any trouble accessing the podcast.

Thanks, and enjoy!

On behalf of NeuWrite West, Erica Seigneur, Forrest Collman, and Mark Padolina

Are you there, God? It’s me, dopamine neuron


Dopamine neurons are some of the most studied, most sensationalized neurons out there. Lately, though, they’ve been going through a bit of an identity crisis. What is a dopamine neuron? Some interesting recent twists in dopamine research have definitively debunked the myth that dopamine neurons are all of a kind – and you should question any study that treats them as such.

Read More

BRAIN Initiative Interim Report: A Reader's Guide


Weighing in at 58 pages, the Interim Report of the BRAIN Working Group (online version here) is a detailed document that identifies and discusses eight research areas determined by the working group (with help from expert consultants, aka additional neuroscientists) to be high-priority areas for the 2014 fiscal year. So what are these high-priority research areas? How closely do they hew to ongoing research areas long acknowledged as important by the neuroscience community? How much do they rely on recruiting non-neuroscientists to research teams? How clearly do these areas address the Presidential mandate of the BRAIN Initiative? Will these goals help us to elucidate the importance of the Initiative, both in our minds and in the minds of the general public?

What follows are my impressions of the critical points contained within each of the eight sections that make up the body of the Interim Report.

Read More

NeuroTalk S2E1: Yun Zhang


Welcome to the new school year, and a new year of NeuroTalk! In the first episode of our second season, our guest is Yun Zhang, an associate professor of biology at Harvard University. We speak with Professor Zhang about growing up in science, and about studying learning and behavior in C. elegans!

Note to listeners: we had some connectivity issues while conducting the interview, so the audio quality suffers in a few places.



You can also stream or download this NeuroTalk here: 

NeuroTalk S2E1 Yun Zhang

Season 1 of NeuroTalk is still available for your listening pleasure here:

NeuroTalk Archive


Astra Bryant

Astra Bryant is a graduate of the Stanford Neuroscience PhD program in the labs of Drs. Eric Knudsen and John Huguenard. She used in vitro slice electrophysiology to study the cellular and synaptic mechanisms linking cholinergic signaling and gamma oscillations – two processes critical for the control of gaze and attention, which are disrupted in many psychiatric disorders. She is a senior editor and the webmaster of the NeuWrite West Neuroblog.

Thinking outside the gene

Our DNA contains the code that builds the bodies we call ourselves. These days, we are used to hearing about genes: phrases of DNA, read out by cellular machinery to construct the components of our bodies. We are used to the idea that mutations in our genes – changes or mistakes in the code – can make people sick. But the code written into our DNA is not as static or inflexible as we might imagine, and it is not only your genetic sequence that shapes your physical traits (phenotype). Cells have layer upon layer of processes that control when and how much a gene is expressed, introducing complexity at multiple levels. This complexity exists not only (as it often seems) to frustrate scientists, but to confer the redundancy, flexibility and robustness that allow development and survival to continue in the face of environmental change. One group at Columbia University is now looking at the role played by these extra levels of regulation in age-related memory loss. The reason some people experience memory loss in old age and others don't may have nothing to do with which genes they have. Rather, the difference may lie in how and when their cells express those genes.

We have a storage problem. At the risk of repeating a decades-old factoid, the DNA contained within a single cell is around 2 metres long, while the average diameter of a human cell is 10 micrometres – a shortfall of space of roughly a factor of two hundred thousand. Somehow all that DNA has to fit inside the cell, and histone proteins are the contortionists that make that possible. By winding DNA around itself, then around histone proteins, then winding those around each other, then winding that again a few more times, our cells can cram in all the DNA necessary to code for everything that makes us human.

But now we have a new problem. If the code we need is in the middle of a tangled mess of other code, wrapped around bulky proteins that are then crammed together even further, how can that code be accessed? This is where epigenetics comes in. Epigenetics is a rather vague term used to describe a whole host of strategies cells use to regulate the expression of genes. But why do cells have to regulate gene expression? And how does that relate to the problem of genetic storage?

Every single cell in your body* contains all of your genetic information. In other words, a single cell in your skin (or anywhere else, for that matter) contains all the information necessary to make any other cell in the body and, theoretically, could be reprogrammed to become any other cell. But a skin cell has no use for, say, the proteins used to send nervous impulses, and it can exploit this position of limited need to tackle the storage problem. Cells don't need to access the entire genetic code all of the time. There are things in there, for example, that are used when we're developing in the womb but have no function once we're out in the wide world. These genes can be archived – set aside to be passed on to our offspring for their in utero development. By selecting which genes are buried away and which are kept close to the surface, ready to be decoded, the cell can perform efficiently and still house the entire human genome.
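To put a number on that packing problem, here is a quick back-of-the-envelope check of the factor quoted above (my own arithmetic, not taken from any of the cited papers):

```python
# Rough packing factor: ~2 metres of DNA per cell vs. a ~10 micrometre cell diameter.
dna_length_m = 2.0          # total DNA length in one cell, in metres
cell_diameter_m = 10e-6     # typical human cell diameter, ~10 micrometres (in metres)

print(dna_length_m / cell_diameter_m)  # -> 200000.0, i.e. a factor of ~2 x 10^5
```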

Figure: Epigenetic mechanisms (from http://www.beginbeforebirth.org/the-science/epigenetics)

The amount of control imparted by epigenetic mechanisms is only just beginning to be appreciated. Perhaps from fear of a return to Lamarckism, there was a reluctance in the scientific community to attribute heritable changes to anything other than mutations in DNA. However, we now know that differences in phenotype can be the result of processes other than changes in genetic sequence. These epigenetic mechanisms have been shown not only to influence an organism's phenotype, but also to have the capacity to be inherited by offspring. That is to say, two organisms can have different phenotypes, not because their genetic sequences are different, but because their parents regulated the expression of those genes in different ways. One highly visual example of this is the Agouti mouse, in which the coat colour of the offspring can be influenced by what the mother is exposed to during pregnancy. Expose the mother to bisphenol A (BPA) and her offspring are more likely to be yellow; without BPA, they come out brown [1].

Figure: Agouti mice (modified from reference 1).

In this recent paper on memory loss [2], the authors wanted to find out what causes age-related memory loss and how it differs, if indeed it does differ, from Alzheimer's disease. Previous studies have suggested that Alzheimer's primarily affects an area of the hippocampal formation called the entorhinal cortex. In contrast, normal ageing (which is also associated with memory loss) involves changes in a different part of the hippocampal formation – the dentate gyrus [3]. With this in mind, the authors took post-mortem brain tissue from healthy people to look for differences between the entorhinal cortex and the dentate gyrus. They looked for changes in gene expression associated with age by measuring how much of each gene was being expressed in each brain region and relating expression levels to the age of each person. One difference they saw was in the dentate gyrus, which showed a large, age-related decrease in the expression of RbAp48, a histone-binding protein involved in modifying histones. These, remember, provide a scaffold for DNA and help to determine which genes are accessible and which are archived. This finding suggested that age-related memory loss may not be the result of a person having a defective gene, but rather the result of incorrect genetic archiving.

As is usual in this kind of study, they turned to a mouse model to look at this protein in more detail. By breeding mice unable to make RbAp48, they were able to show that this protein is necessary for normal memory: mice lacking RbAp48 performed worse on memory tests (navigating a maze or recognising an unfamiliar object). As mice get older, their memory appears to deteriorate on tests like these, and mice lacking RbAp48 experienced this deterioration at a younger age than mice with normal levels of RbAp48. In the human brains, the decrease in RbAp48 wasn't seen in the entorhinal cortex, the area associated with Alzheimer's disease, suggesting that age-related memory loss has a unique starting point and is not just an early sign of Alzheimer's. This could have important consequences for diagnostics in the future.

The more we learn about epigenetics, the more obvious it becomes that there is more to go wrong than we thought. You not only need the right genes, but also the right control mechanisms in place to make sure you have the right amount of each gene product in each cell at all times throughout life. At the same time, we know that most people manage this, reflecting the amazing robustness of the system. Increasing our understanding of these control mechanisms has implications for treatment too. By looking at the underlying cause of a disease, we can treat it more effectively. This has been standard practice for decades in infection research, but may be applied more widely to other diseases in the future. For example, two patients presenting with fever and breathing difficulties will be tested for pneumonia. One may have a fungal infection and the other a bacterial infection. These need to be treated very differently, but only knowledge of the underlying cause can tell us how to treat each patient. Similarly, treatment may be very different for someone lacking a gene completely compared with someone who has the gene but in an inaccessible place. Both patients would have the same symptoms, but an analysis of the underlying causes could completely change the nature of the treatment. It is this sort of personalised diagnosis that could help to provide the right treatment for a patient – one that would not only help the patient recover more quickly, but could also reduce the amount of money wasted on ineffective treatments.

*There are a few notable exceptions. Red blood cells have no nucleus and contain no genetic DNA. Egg/sperm cells have half the amount of DNA as the rest of your cells to make sure an embryo has the correct amount after fusion.

Jargon box

Histone: a type of protein used as a scaffold for DNA. DNA molecules wind themselves around histones to reduce the amount of space needed to house the genome.

Phenotype: the observable characteristics of an organism, from visible traits (e.g. hair colour) to cellular traits (e.g. cell shape or structure).

References

1)    Dolinoy et al. Maternal nutrient supplementation counteracts bisphenol A-induced DNA hypomethylation in early development. Proc Natl Acad Sci U S A. (2007) 104 (32): 13056–13061. Link. OPEN ACCESS!

2)    Pavlopoulos et al. Molecular Mechanism for Age-Related Memory Loss: The Histone-Binding Protein RbAp48. Science Translational Medicine (2013) 5 (200): 200ra115. Link.

3)    Small et al. A pathophysiological framework of hippocampal dysfunction in ageing and disease. Nat. Rev. Neurosci. (2011) 12: 585–601. Link. OPEN ACCESS!

How To Train Your Brain (Part II)

Can playing a game improve your cognitive abilities or maintain them as you age? We learned from Erica Seigneur's post on August 15 that the evidence in the neuroscience literature is inconclusive. But a new paper in the September 5 issue of Nature claims a breakthrough (1). Dr. Joaquin Anguera and colleagues at UCSF trained older adults to multi-task with a custom-made video game called NeuroRacer and reported big improvements not just in multi-tasking but also in working memory and sustained attention. How are their experiments different from those that reported no effect of brain-training games?

Anguera and colleagues focused narrowly on improving multi-tasking in older adults to or above the level of multi-tasking ability found in younger adults. They designed NeuroRacer to get participants to simultaneously drive a virtual car and respond to signs flashing on the computer screen. Both the driving and the responding to signs had many levels of difficulty. For each participant, the authors picked a difficulty level of driving and of responding that the participant could do with 80% accuracy. They defined multi-tasking ability as the difference in accuracy between only responding to signs and responding to signs while driving, with a smaller difference indicating greater ability. After these preparations, they measured baseline multi-tasking ability for participants aged 20 to 79 and found a linear decline with age.

Then they trained a different group of participants, aged 60 to 85, with NeuroRacer for one hour three times a week for four weeks, adapting the difficulty levels as participants got better at the game. An active control group, also aged 60 to 85, played a version of NeuroRacer that alternated between driving and responding to signs without multi-tasking, but was led to believe that they were also training in multi-tasking. A passive control group from the same age range did not play NeuroRacer. At the end of four weeks of training, both the experimental and the active control groups could multi-task better than passive controls, and the experimental group was better than active controls. About six months after training, the experimental group had lost some multi-tasking ability but was still better than not only both control groups but also a group of 20-year-olds playing NeuroRacer for the first time. On the basis of these results, Anguera and colleagues declared success in using NeuroRacer to improve multi-tasking in older adults.
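To make that metric concrete, here is a minimal sketch of the multi-tasking "cost" as I read it from the paper's description: the drop in sign-task accuracy once driving is added, with a smaller drop meaning better multi-tasking. This is my own illustration, not the authors' code, and the accuracy numbers are hypothetical.

```python
def multitasking_cost(sign_only_accuracy: float, sign_while_driving_accuracy: float) -> float:
    """Drop in sign-task accuracy caused by having to drive at the same time.

    A smaller drop indicates better multi-tasking ability.
    """
    return sign_only_accuracy - sign_while_driving_accuracy

# Hypothetical participant, calibrated to ~80% accuracy on each task performed alone.
before_training = multitasking_cost(0.80, 0.55)  # 0.25: large accuracy drop when multi-tasking
after_training = multitasking_cost(0.80, 0.72)   # about 0.08: much smaller drop after training

print(before_training, after_training)
```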

But did the participants actually improve their cognitive abilities, or did they just get really good at NeuroRacer? To address that, Anguera and colleagues put the participants they trained through more tests. They stuck electrodes to their scalps and measured electrical signals from the brain, called theta waves, that have been correlated with multi-tasking, sustained attention, working memory, and general cognitive control – which I interpret, loosely, to mean a healthy brain. They asked participants to complete another video-game-based test called the Test of Variables of Attention (TOVA), which is commonly used to diagnose ADHD (2). From the results, they declared improved sustained attention. Though they also claimed improvements in working memory, they offered only the briefest of descriptions of their method for testing it, in Supplementary Figure 12, and it wasn't sufficient for me to judge its merits. However, their measurements of theta waves are also supposed to support this claim. In all, Anguera and colleagues went to great lengths to demonstrate a general cognitive benefit of NeuroRacer for older adults.

But Anguera and colleagues themselves cite a Nature paper from 2010 by Dr. Adrian M. Owen and others that tested many more participants with brain-training games similar to commercially available ones and reported no evidence of general cognitive benefit from their use (3). What's going on here? Anguera and colleagues point out that, unlike Owen and co-workers, who tested people from the general population, they trained members of a specific sub-population, older adults, in something where they had a measurable impairment, i.e. multi-tasking. They also stressed that because NeuroRacer adapts its difficulty to the abilities of each user, it provides a consistent challenge and more effective training. Anguera's supervisor, Dr. Adam Gazzaley, co-founded Akili Interactive Labs to commercialize the concept behind NeuroRacer, so perhaps in a few years we will be able to test it out for ourselves (4). In the meantime, let's set aside the question of benefit from video games and just appreciate how much fun they are.

Sources

  1. Anguera J A et al. (2013). “Video game training enhances cognitive control in older adults.” Nature. 501:97-101. Paywall.
  2. http://www.tovatest.com
  3. Owen A M et al. (2010). “Putting brain training to the test.” Nature. 465:775–778.
  4. http://www.ucsf.edu/news/2013/09/108616/training-older-brain-3-d-video-game-enhances-cognitive-control

Thomas Südhof and Richard Scheller receive 2013 Lasker Basic Medical Research Award

This morning, the Lasker Foundation announced the recipients of the 2013 Lasker Basic Medical Research Awards. The prize went to Stanford professor Thomas Südhof and former Stanford professor (and current Executive VP of Genentech) Richard Scheller, for their work on the molecular machinery underlying the rapid release of neurotransmitters. Specifically celebrated are their discoveries of VAMP/synaptobrevin, synaptotagmin, syntaxin, and many additional components of the synaptic release machinery.

The Lasker Foundation concluded that:

By systematically exposing and analyzing the proteins involved in neurotransmitter release, Südhof and Scheller have transformed our description of the process from a rough outline to a series of nuanced molecular transactions. Their work has revealed the elaborate orchestrations that lie at the crux of our most simple and sophisticated neurobiological activities. (1)

The Lasker Basic Medical Research Award is given to scientists "whose fundamental investigations have provided techniques, information, or concepts contributing to the elimination of major causes of disability and death" (1). For more on the Awards, visit the Lasker Foundation Award Overview webpage.

For more on the groundbreaking work for which Südhof and Scheller received their award, visit the Lasker Foundation Award Description webpage.

An interview with Südhof and Scheller is also available for your viewing pleasure. Video of the award presentation and acceptance speeches will be available at the Lasker Foundation website after 2 p.m. on Friday, September 20, 2013.

Many congratulations to Drs. Südhof and Scheller!

Other Lasker Awards announced today were:

The Lasker-DeBakey Clinical Medical Research Award, given to Graeme Clark, Ingeborg Hochmair and Blake Wilson, for their development of the modern cochlear implant.

The Lasker-Bloomberg Public Service Award, given to Bill and Melinda Gates, for the work achieved through their foundation.

 

Sources

Evelyn Strauss, Albert Lasker Basic Medical Research Award Description.


Astra Bryant

Astra Bryant is a graduate of the Stanford Neuroscience PhD program in the labs of Drs. Eric Knudsen and John Huguenard. She used in vitro slice electrophysiology to study the cellular and synaptic mechanisms linking cholinergic signaling and gamma oscillations – two processes critical for the control of gaze and attention, which are disrupted in many psychiatric disorders. She is a senior editor and the webmaster of the NeuWrite West Neuroblog.

Thinking about Thinking

Like most neuroscientists, I’ve often thought about consciousness. I’ve worried about free will. And then I’ve gotten goosebumps and given up when I realized that I was consciously, willfully thinking about how consciousness and free will are illusions. Michael Graziano of Princeton University, however, has doubled down and tried to formulate a coherent theory of consciousness. He calls it “Attention Schema Theory.” While it’s far from the only theory of consciousness out there, it’s intriguing enough to me to be worth further consideration here.

Before I describe Attention Schema Theory, let’s do a little preliminary thinking about thinking. Very little actual data exists that tells us much about the nature of consciousness – it is a hard problem (or, as philosopher David Chalmers put it, the hard problem) – but we do have a few things to work with.

Everyone feels that he or she is conscious.

When I write about consciousness, every single reader knows intuitively what I mean. To be reading and understanding this blog post, you've got to be conscious. Many philosophers, though, argue that we can only know for sure about our own consciousness. You could all be "zombies," automatons programmed cleverly to reply to my statements in an apparently conscious way.

We are predisposed to assume that others are conscious.

Despite the existence of the zombie theory, in everyday life most people assume other people are conscious. In fact, we go much further. We also often ascribe consciousness to animals (plausible in the case of animals with a reasonably complicated nervous system) as well as to teddy bears, cartoon characters, and things with googly eyes stuck to them (even though we know, intellectually, that that's implausible). We even sometimes ascribe consciousness to computers – yelling at them when they break, pleading with them to do what we want, and being tricked into thinking they are human in (admittedly constrained) Turing tests. What is it about our own consciousness that so inclines us to presume consciousness in others? Why do we tend to equate eyes and facial expressions with real emotions?

"Heavy on the Nose" via eyebombing.com

Consciousness is inaccurate.

Many discussions of consciousness focus on its definition as a state of awareness. Awareness, though, can be tricky. If I see a cat and think “a cat!” then I’m having a conscious experience of a cat, for sure. But if I see a crumpled rag and think, just for a moment, “a cat!” then did I just have a conscious experience of a cat? Basically, yes. Our consciousness is easily duped by illusions, which reveal that consciousness involves assumptions made by our brain that can be independent of sensory experience. The delusions suffered by some psychiatric patients offer a stark example. In the article that inspired this post, Michael Graziano describes a patient who knew he had a squirrel in his head, despite the fact he was aware it was an illogical belief and claimed no sensory experience of the squirrel. Another example of the inaccuracy of consciousness is one we can all experience. It’s called the “phi phenomenon.” Two dots flash on a screen sequentially. If the right timing is used, it appears that there’s only one dot, which moves from one place to the other. In other words, although the dot did not, in reality, slide smoothly across the screen from one place to the other, our consciousness inaccurately perceives motion. Daniel Dennett uses the phi example in his exposition of his “Multiple Drafts” model of consciousness.

Consciousness can be manipulated.

Self-awareness is not always what it seems. Humans are programmed to search for patterns and meaning, and we are naturally inclined to attribute causation to correlated events even when no such relationship exists. We are suggestible. We can become even more suggestible and less autonomous when hypnotized. In numerous psychology studies, researchers have described various ways of reliably manipulating participants' choices (for example, using subtle peer pressure). Most of the time, the participants are not even aware of the manipulation and insist they are acting of their own free will. In addition to being a state of awareness, consciousness is conceived of as a feeling of selfhood, a sense of individuality that separates you from the rest of the world and allows you to find meaning in the words "me" and "you." However, this feeling of selfhood can also be manipulated. Expert meditators such as Buddhist monks have trained themselves to erase this feeling of selfhood in order to experience a feeling of "oneness" while meditating. Brain scans of meditating monks don't provide a lot of detail on the mechanisms underlying "oneness," but they do suggest that the monks have learned to significantly alter their brain activity while meditating. A feeling of oneness can also be thrust upon you: Jill Bolte Taylor, the neuroscientist author of "My Stroke of Insight", describes a feeling of oneness and loss of physical boundaries as her massive stroke progressed. Hallucinogenic drugs such as LSD can also provoke feelings of oneness. Out-of-body experiences fall into the same category: they can often be induced by meditation, drugs, near-death experiences, or direct brain stimulation of the temporoparietal junction. Damage to the temporoparietal junction on one side of the brain results in "hemispatial neglect," in which a person essentially ignores the opposite side of their body and may even deny that that side of the body is a part of their self.

-----------------------

Now, let's get back to Attention Schema Theory. What is this theory, and how does it help fit some of our observations about consciousness together? Is it a testable theory? Can it help drive consciousness research forward?

At the heart of Attention Schema Theory is an evolutionary hypothesis. It assumes that consciousness is a real thing, physically represented in the brain, which evolved under selective pressure. The first nervous systems were probably extremely simple, something like the jellyfish "nerve net" of today. They were made to transduce an external stimulus into a signal within the organism that could be used to effect an adaptive action. The more information an organism could extract from its environment, the greater an advantage it had in surviving and reproducing in that environment, so many sophisticated sensory modalities developed. But there's lots of information in the world – lights, sounds, smells, and so on, coming at you from every angle all the time. It can be overwhelming and distracting. How do you know which bits of information are actually important to your survival and which can be ignored? The theory is that some kind of top-down control network formed that enhanced the most salient signals (think: a sudden crashing sound, the smell of food, anything that you previously learned means a predator is nearby). From this control network came attention.

Attention allows you to focus on what's important, but how do you know what's important? Slowly, attention increased in sophistication. It went from, for example, always assuming the smell of food is attention-worthy to being able to decide whether it's attention-worthy by modeling your own internal state of hunger. If you're not hungry, it's not worth paying attention to food cues. Finding shelter or a mate might be more important. According to Graziano, this internal model of attention is what constitutes self-awareness. Consciousness evolved so that you can relate information about yourself to the world around you in order to make intelligent decisions. But since consciousness is just a shorthand summary of an extremely complex array of signals, a little pocket reference version of the self, it involves simplifications and assumptions that make it slightly inaccurate.

Attention Schema Theory at a glance: selective signal enhancement to consciousness. via Graziano Lab Website

Consciousness isn’t quantal. Basic self-awareness is only the beginning. What about being able to visualize alternative realities? What about logical reasoning abilities? What about self-reflection and self-doubt? Graziano does not address all of the aspects of consciousness that exist or how they might have evolved, but he does go on to talk a bit about how consciousness informs complex social behaviors. If you’re living in a society, it helps to be able to model what other people are thinking and feeling in order to interact with them productively. To do this, you have to understand consciousness in an abstract way. You have to understand that your consciousness is only your perspective, not an objective account of reality, and that adds an additional level of insight and self-reflection into the equation.  It’s worth noting that there is a specific disorder in which this aspect of consciousness is impaired: autism.

-----------------------

Most of the appeal of Attention Schema Theory, to me, lies in its placement of consciousness as a fully integrated function of the brain. It doesn’t suppose any epiphenomenal aura that happens to be layered on top of normal brain function but that serves no real purpose. Instead, it says that consciousness is used in decision-making. It presents an evolutionary schema of why we might be conscious and also why we tend to attribute consciousness (especially emotions) to others. It explains, somewhat, why consciousness is inaccurate and malleable: it’s not built to represent everything about the real world faithfully, it’s just meant to be a handy reference schematic.

Attention Schema Theory isn’t entirely satisfying, though. It’s the outline of an interesting line of reasoning but not a complete thought. No actual brain mechanisms or areas are identified or even hypothesized. How is consciousness computed in the brain? I agree with Daniel Dennett that there’s no “Cartesian theater,” but there must be some identifiable principle of human brain circuit organization that allows consciousness. To move any theory of consciousness forward scientifically, we need a concrete hypothesis. But we don’t just need a hypothesis: we need a testable hypothesis. Without a way of experimentally measuring consciousness, the scientific method cannot be applied. Currently, our concept of consciousness stems only from our own self-reporting and, as mentioned above, the only consciousness you can really truly be sure of is your own.

Given the suppositions of Attention Schema Theory, though, there may be some proxies of consciousness we can study that would help us flesh out our understanding and piece together reasonable hypotheses. First, attention. Attention is by no means consciousness (I can tune a radio to a certain frequency but that doesn’t mean it’s conscious), but if consciousness evolved from attention then they should share some common mechanisms. Many neuroscientists already study attention, but they may not have considered their research findings in light of Attention Schema Theory. Perhaps there are already some principles of how brain circuits support selective attention that could be adapted and incorporated into Graziano’s schema. If consciousness really evolved from attention, then there should exist some “missing links,” organisms that display (or displayed) transitional states of consciousness somewhere between rudimentary top-down mechanisms for directing attention and the capacity for existential crises. Can we describe these links?

Second, theory of mind. Theory of mind is our ability to understand that other minds exist that may have different perspectives from our own. Having theory of mind should require a sophisticated version of consciousness, but the absence of theory of mind does not imply a lack of consciousness. You don't need to be aware of others' minds to be aware of your own. Most children with autism fail tests of theory of mind, but are still clearly conscious beings. Still, theory of mind and consciousness should be related if Graziano is right, and we know a few things about theory of mind. Functional imaging studies point towards the importance of the anterior paracingulate cortex as well as a few other brain areas in understanding the mental states of others. "Mirror neurons," neurons that respond both when you perform an action and when you watch someone else perform that same action, have been discovered in the premotor cortex of monkeys, and some have argued that monkeys and chimpanzees have theory of mind. If they do, then we'd at least have a potential animal model for further neurophysiological research (though the ethics of such research could be thorny). There is very little evidence to support theory of mind in lower mammals such as rats. In that case, however, comparative anatomy studies of the theory of mind-related brain areas identified by functional imaging could still be informative. We already know of one interesting, mostly hominid-specific class of neurons that exists in suggestive cortical areas (such as anterior cingulate cortex, dorsolateral prefrontal cortex, and frontoinsular cortex): spindle neurons, also known as von Economo neurons (actually, these neurons can also be found in whales and elephants!). Unfortunately, we have no idea what these neurons do, yet. Further studies of von Economo neurons could tell us about the mechanisms underlying theory of mind and, by extension, consciousness. Maybe.

Location of von Economo neurons. via Neuron Bank

I’ll be curious to see where Graziano goes with his Attention Schema Theory. It is, at the very least, a bold attempt at answering a question that has vexed humanity through the ages. I wonder, though, whether the question can ever be answered. Perhaps you are now inspired to go out and do some awesome research. I, for one, am getting goosebumps again, so I think I’ll take a break.

 

Ask a Neuroscientist! - What is the synaptic firing rate of the human brain?


A couple of days ago, we received an email from a high school student named Joseph. Joseph, having spent some time trawling the net and his library, found himself with no answer to the question, "How many synaptic fires [sic] are there (in a human brain) per second?"

An edited version of my response to the question appears below. In my response, I break down the neural complexity that makes answering Joseph's question extremely difficult. I then totally ignore that complexity in order to produce two firing rate ranges: neuronal and synaptic.

Do you have an additional mathematical solution to Joseph's question? A philosophical objection to the idea of quantifying the human brain in terms of synaptic firing rates? The comments section is where it usually is. 

What is the synaptic firing rate of the human brain?

I'd like to be able to provide a single number, but in reality, the brain is pretty complex, so it's difficult to come up with a single number to describe the firing rate of the entire brain.

But let's explore the question a bit.

First, I'm going to simplify the question by looking at neuronal firing rates, rather than synaptic firing rates. So from that starting point, if we wanted to be really simplistic about the whole thing, we could estimate a firing rate. Such a calculation might go something like this:

Current estimates of the number of neurons in the human brain are around 86 billion (see Bradley Voytek's discussion of how scientists extrapolate this number, and also this article on the estimate). Multiply 86 billion by the "average firing rate" of an individual neuron.

Now, the firing rate of an individual neuron can vary quite a bit, and the firing rates of different types of neurons are also extremely variable. This variance partly depends on the intrinsic properties of the neurons (some are tuned to fire very, very quickly – 200+ Hz – whereas some neuron types prefer to fire more slowly, below the 10 Hz range). A lot of the variance also depends on what the brain is doing. For example, neurons in the visual system may be practically silent in the dark (or during sleep), but will fire very fast when visual information is coursing into the nervous system from the eyes. And the exact rate of firing in visual neurons is going to depend on the properties of the visual stimulus (how bright it is, how fast it moves, what color it is). Similarly, neurons in your hippocampus, a brain structure important for memory and spatial navigation, may fire quickly as you walk around your room, but may be relatively quiet as you sit in front of a computer reading this email.

All this variability is what makes it so hard to estimate the firing rate of a human brain at any given second.

But, if you press me for a back-of-the-envelope calculation, I'd say the best way to estimate the firing rate of a neuron is to come up with a potential range. Now, there's probably been a bunch of research on the distribution of firing rates within various cell populations, and quite frankly, I'd only really believe a rate in the context of a particular activity you are interested in (rates can change dramatically between passive sitting and active participation in a task). But generally, the range for a "typical" neuron is probably from <1 Hz (less than 1 spike per second) to ~200 Hz (200 spikes per second).

To ruthlessly simplify, treating all 86 billion neurons in the human brain as copies of a single "typical" neuron, and ignoring all of the glorious cellular specificity that characterizes the brain, we're left with a range of 86 billion to 17.2 trillion action potentials per second.

Let's go back to the question of synaptic firing rates. Even though an action potential produced in a neuron is not guaranteed to produce release of neurotransmitter at a synapse, let's ignore that point and assume the opposite. I've seen people quote a minimum number of synapses of 100 trillion (although I'm not clear where that number came from). So, let's do our math again: 100 trillion synapses, each with an independent firing rate range of <1 Hz to ~200 Hz, gives a range of 100 trillion to 20 quadrillion synaptic events per second.
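For anyone who wants to rerun this back-of-the-envelope arithmetic, here is a small sketch of the two ranges above (my own illustration, using exactly the simplifying assumptions just stated):

```python
# Back-of-the-envelope ranges using the assumptions stated above.
NEURONS = 86e9          # ~86 billion neurons
SYNAPSES = 100e12       # ~100 trillion synapses (the quoted lower bound)
RATE_LOW_HZ = 1.0       # "typical" neuron, low end of the range (~1 spike per second)
RATE_HIGH_HZ = 200.0    # high end of the range (~200 spikes per second)

neuronal_low, neuronal_high = NEURONS * RATE_LOW_HZ, NEURONS * RATE_HIGH_HZ
synaptic_low, synaptic_high = SYNAPSES * RATE_LOW_HZ, SYNAPSES * RATE_HIGH_HZ

print(f"Action potentials per second: {neuronal_low:.3g} to {neuronal_high:.3g}")
# -> 8.6e+10 to 1.72e+13 (86 billion to 17.2 trillion)
print(f"Synaptic events per second:   {synaptic_low:.3g} to {synaptic_high:.3g}")
# -> 1e+14 to 2e+16 (100 trillion to 20 quadrillion)
```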

Again, and I really cannot stress this enough, these numbers don't reflect what actually goes on in a human brain in any given second. The actual firing rate depends so much on what the brain is doing at that moment that back-of-the-envelope calculations such as the ones I just wrote down are (in my opinion) absolutely meaningless. But for what it's worth, there they are. And if these numbers at least give us a range, you can imagine the sheer computational power that will be required to record all the neurons in the human brain.

Ask a Question!

If you have a question for one of our neuroscientist contributors, email Astra Bryant at stanfordneuro@gmail.com, or leave your question in the comment box below.


Astra Bryant

Astra Bryant is a graduate of the Stanford Neuroscience PhD program in the labs of Drs. Eric Knudsen and John Huguenard. She used in vitro slice electrophysiology to study the cellular and synaptic mechanisms linking cholinergic signaling and gamma oscillations – two processes critical for the control of gaze and attention, which are disrupted in many psychiatric disorders. She is a senior editor and the webmaster of the NeuWrite West Neuroblog.

Perineuronal Nets, aka Golgi vs Cajal (Round 2)


I'm going to tell you a story about the perineuronal net. The what, now? I hear (some of) you cry.

If neurons and glia are the plants of our brains, the extracellular matrix is the trellis upon which those plants grow and intertwine. The perineuronal net is a specialized portion of the extracellular matrix, surrounding (primarily) the soma and proximal dendrites of parvalbumin-positive interneurons. The mesh-like structure of the perineuronal net, with holes accommodating synaptic contacts onto the embedded neurons, appears critical for forming and stabilizing synapses.

The appearance of the perineuronal net coincides with the closing of critical periods; enzymatic breakdown of the perineuronal net can reinstate ocular dominance plasticity in adult animals. (This flavor of plasticity can usually only be triggered in juveniles.) Also, degrading the perineuronal net allows extinction training to fully eliminate fear conditioning in adult animals, a feat usually only possible in juvenile animals (adults respond to extinction training with only a temporary inhibition of fear responses) (1). Thus, many independent studies implicate the perineuronal net as a negative regulator of plasticity. The net stabilizes synapses, preventing unwanted change within established brain circuits.

Interestingly, the perineuronal net appears damaged following status epilepticus, a prolonged seizure event that commonly triggers epileptogenesis, and is followed by axonal sprouting and enhanced synapse numbers within the hippocampus (1). The loss of the perineuronal net may establish a permissive environment for the widespread synaptic reorganization that occurs during temporal lobe epileptogenesis.

So to summarize: the perineuronal net, and its parent structure, the extracellular matrix, may be important for establishing the synaptic stability that maintains the delicate interconnections of the nervous system.

So, the story.

Epic Science Battles of History: The Aftermath

The first thing you need to understand is that it took until the 1980s for the perineuronal net to be accepted as an interesting, important, and in fact existing, structure – despite the first published description appearing almost 100 years earlier, in 1898.

Why the long delay?

Turns out the perineuronal net was a victim of the fallout of probably the most famous science fight in neuroscience.

That's right, this story involves Santiago Ramon y Cajal and Camillo Golgi.

Cajal and Golgi are, of course, well known for their decades-long disagreement over whether the nervous system is a continuous network (the reticular theory, Golgi's view) or comprised of distinct cells (the neuronal doctrine, the correct answer). The controversy between the reticularists and the proponents of the neuronal doctrine would have been ongoing in 1898, when Golgi presented his observations of the perineuronal net, "a continuous envelop that enwraps the body of all the nerve cells extending to the protoplasmic prolongements up to second and third order arborizations" (2). Contemporaries of Golgi followed up on his initial observations (most notably: Donaggio, who described the filamentous pattern within the net; Bethe, who differentiated between perineuronal nets and the more diffuse extracellular matrix; and Held, who proposed shared components and a glial origin for the perineuronal and diffuse nets) (2).

But this initial period of study was brought to an abrupt halt by the entrance of Ramon y Cajal onto the scene.

Cajal was of the opinion that the perineuronal nets observed by Golgi (and everyone else) were nothing more than a coagulation artifact produced during the staining process that bears Golgi's name.

According to Carlo Besta (a neurologist and psychiatrist), Cajal's victory on the subject of the neuronal doctrine made his word automatically superior to that of Golgi, Held, Bethe and Donaggio. "It has been sufficient that Cajal claimed that [perineuronal] and diffuse nets were an artifact ... and most of the scientific world took no further interest in the matter" (3).

Although a few individuals continued to investigate the perineuronal net (including G.B. Belloni, who observed structural changes in both perineuronal and diffuse nets in humans suffering from dementia, gliosis and psychiatric diseases), in general, research stagnated until the 1980s, when the advent of better staining techniques made it possible to reveal perineuronal nets as real structures, and not mere artifacts of an imperfect staining technique (2).

The modern study of perineuronal nets is still in its infancy (as is, for that matter, all of neuroscience), but as the previous section attests, several studies have hinted at a role in restricting plasticity and maintaining a stabilized environment for neuronal (and glial) function.

Which brings me to the final note of this post, which also happens to be the trigger of my recent readings into the history of the perineuronal net.

A Modern "Prophecy"

Back in June, a lab mate of mine passed around a PNAS article with a provocative title, and an attention-grabbing author.

The article? "Very long-term memories may be stored in the pattern of holes in the perineuronal net." By (Nobel Prize Winner) Roger Tsien.

So, I'm not going to do a full analysis of Tsien's article, which reads more like an R01 than anything else. His basic thesis rests on the assumption that long-term memory storage within the human brain necessarily involves a long-term molecular substrate. Tsien identifies the molecules that make up the perineuronal net as likely candidates for the molecules that encode our long-term memories. He steps beyond the more common supposition that the perineuronal net is a permissive structure for synaptic stability, claiming that "very long-term memories are stored in the pattern and size of holes in the PNN [perineuronal net]…" (4). Lest we confuse his proposal with the more common understanding of the function of the perineuronal net, Tsien writes: "reviews on the PNN propose permissive, supportive roles… analogous to the importance of insulation on the wiring inside a computer: essential for function but not where bytes are dynamically stored." (4) Tsien maintains that the perineuronal net is the storage device for long-term memories, the location where "bytes are dynamically stored."

Tsien's hypothesis, which he compares to Watson and Crick's theory of DNA, is severely lacking in experimental evidence. Thus the PNAS article, in which Tsien describes experiments he believes will test his hypothesis. Having read the article abstract-to-bibliography multiple times, I remain unconvinced that the proposed experiments would be sufficient to support Tsien's theory. Will the experiments prove insightful? Does the perineuronal net directly encode bytes of long-term memory? We may have to wait another 100 years to find these answers, as Tsien seems to have no plans to actually conduct the experiments he proposes. Instead, he hopes that other scientists will use his PNAS paper as a roadmap for future experiments. Extending his Watson and Crick metaphor, he calls for the Rosalind Franklins of the world to supply him with the experimental data his hypothesis demands; "Perhaps, in a few years, at least one prophecy can be vindicated" (4). As someone who has, uh, heard of Franklin, I wonder if Tsien realizes what a raw deal he is proposing for his fellow scientists.

Sources

  1. McRae and Porter (2012). The perineuronal net component of the extracellular matrix in plasticity and epilepsy. Neurochemistry International 61: 963-972. Link

  2. Vitellaro-Zuccarello, De Biasi, Spreafico (1998). One hundred years of Golgi's "perineuronal net": history of a denied structure. Ital J Neurol Sci 19:249-253. Link

  3. Besta C (1928) Dati sul reticolo periferico della cellula nervosa, sulla rete interstiziale diffusa e sulla loro probabile derivazione da particolari elementi cellulari. Boll Soc It Biol Sper 3:966-973

  4. Tsien (2013). Very long-term memories may be stored in the pattern of holes in the perineuronal net. PNAS 110(30): 12456-12461. Link

 


Astra Bryant

Astra Bryant is a graduate of the Stanford Neuroscience PhD program in the labs of Drs. Eric Knudsen and John Huguenard. She used in vitro slice electrophysiology to study the cellular and synaptic mechanisms linking cholinergic signaling and gamma oscillations – two processes critical for the control of gaze and attention, which are disrupted in many psychiatric disorders. She is a senior editor and the webmaster of the NeuWrite West Neuroblog.