One-track mind

I promised myself so faithfully that I would write about neuroscience. Or at least neural-immune interactions. But it’s tricky to think about other topics when something exciting happens in vaccine research and there’s this public outlet just sitting there asking for science news… So here follows a blog post about vaccines, with the crowbarred excuse that cerebral malaria is a thing that happens.

Vaccines work. In terms of public health benefits, I’d put vaccination up there with antibiotics and soap. This graphic from Leon Farrant [1] earlier in the year gives a clear idea of just how effective the vaccination programmes of the 20th century have been. But we don’t hear much these days about new vaccines coming onto the market. One reason is that contemporary vaccine researchers have been left some tough nuts to crack: the viruses, bacteria and parasites still causing significant disease have evolved myriad ways to evade not only the natural immune system, but also traditional vaccination strategies. Gone are the days when scientists like Maurice Hilleman could develop multiple vaccines in a single career but, with a tenacity verging on stubbornness, modern vaccine researchers are beginning to chip away at the remaining problems. Last week saw the publication of a new vaccine trial from Robert Seder’s group at the National Institute of Allergy and Infectious Diseases, describing an important step in the development of a malaria vaccine.

 

Malaria is one of the toughest nuts around. This parasite has a long evolutionary history with humanity so has intimate infective knowledge of the human body. Its lifecycle is not only complicated, but involves several different stages (analogous to eggs, larvae and adults) all of which look different and live in different bodily tissues, not to mention different hosts.

Malaria lifecycle

Just look at how complicated this little beast is! This kind of infection makes life very difficult for the immune system. Whether in response to an infection or a vaccine, white blood cells rely on the fact that infectious agents look very different from humans, and that the same infectious agent infecting for a second time will look the same as it did the first time. Like a master of disguise changing outfits and donning false moustaches, parasites can change their surface proteins every few days, leaving the immune system far behind in its attempts at recognition. Not only that, but they’ll squirrel themselves away inside different cells to avoid even coming into contact with white blood cells.

So how do we get ahead of this sneaky beast? Seder’s group think they have the answer [2]. Or at least one potential answer. Instead of trying to remove the parasite once it has a foothold in the body, their PfSPZ vaccine targets sporozoites – the form the parasite takes when it is first injected into the bloodstream by a mosquito. PfSPZ stands for Plasmodium falciparum sporozoite. P. falciparum is one of the most common species of malaria parasite and is used in malaria research because (in a very rare move) the CDC allows the deliberate infection of humans with it in a model known as controlled human malaria infection (CHMI). Here, scientists get to test malaria vaccines in healthy humans by deliberately infecting them with malaria after vaccination and waiting to see if they get sick. To prevent full-blown disease, subjects are treated with approved anti-malarials after the test whether they get sick or not (without an effective vaccine, they usually do).

Up to now, malaria vaccines have proved broadly unsuccessful. The only method known to provide robust protection to date involved leaving the task of immunisation to the mosquitoes. In that study [3], conducted by the US military, scientists took mosquitoes infected with malaria, exposed them to radiation to render the parasite uninfectious and then let them bite human subjects. Over 1000 times. After 1000+ bites, over the course of up to 10 years, people were protected from subsequent CHMI. In light of the obvious difficulties and objections to this, Seder and his colleagues have now refined the original technique. It still involves mosquitoes infected with malaria, and those mosquitoes are still irradiated to knock out the parasite. But now, harmless parasites from several thousand mosquitoes can be isolated, purified and injected into people in a faster, more controlled, and less uncomfortable way. After one injection (rather than 1000 mosquito bites), the subjects are protected from subsequent CHMI.
One of the things that makes this new study stand out is that the vaccine fully protected everyone who was given the highest dose. That is to say, at the highest dose, the vaccine protected every subject, even when they were deliberately injected with infectious malaria parasites.

This is exciting news for anyone in the vaccine field, but it still comes with some caveats. The participants in this trial were infected with malaria just 3 weeks after their last immunisation, so we have no idea how long protection will last. The immunisation itself is also an issue, because it only works if injected directly into the bloodstream (intravenously). Most vaccines are given into the muscle or skin, a relatively unskilled procedure; intravenous injection requires more expertise and carries more risk, so rolling out an intravenous vaccine to areas with poor infrastructure and few skilled medical professionals will be tough. Having said all that, and taking into account yet more caveats given by the authors themselves, this really is exciting news. Malaria kills 0.5-1 million people every year, 86% of whom are children under 5 [4]. This vaccine may have its limitations, but it’s worth a shot.

 

References:

[1] http://www.behance.net/gallery/Vaccine-Infographic/2878481

[2] http://www.sciencemag.org/content/early/2013/08/07/science.1241800.full

[3] http://jid.oxfordjournals.org/content/185/8/1155.full (OPEN ACCESS!)

[4] http://www.who.int/malaria/publications/world_malaria_report_2012/wmr2012_no_profiles.pdf

Why most neuroscience findings are false, part II: The correspondents strike back.


In my May post for this blog, I wrote about a piece by Stanford professor Dr. John Ioannidis and his colleagues, detailing why, as they put it, "small sample size undermines the reliability of neuroscience" [see previous blog post: Why Most Published Neuroscience Findings are False]. As you might imagine, Ioannidis's piece ruffled some feathers. In this month's issue of Nature Reviews Neuroscience, the rest of the neuroscience community has its rejoinder.

Here is a brief play-by-play.

Neuroscience needs a theory.

First up: John Ashton of the University of Otago, New Zealand. He argues that increasing the sample size is not the most important problem facing the analysis and interpretation of our experiments. In fact, he says, increasing the sample size just encourages hunting around for ever-smaller and ever-less-meaningful effects: with enough samples, any effect, no matter how small, will eventually pass for statistically significant. Instead, he believes neuroscientists should focus on experiments that directly test a theoretical model. We should conduct experiments that have clear, falsifiable hypotheses and a predictable effect size (based on the theoretical model). Continuing to chase after smaller and smaller effects, without linking them to a larger framework, he argues, will cause neuroscience research to degenerate into "mere stamp collecting" (a phrase he borrows from Ernest Rutherford, who believed that "all science is either physics or stamp collecting").

Ioannidis and company reply by agreeing that a theoretical framework and a good estimate of effect size would be great, but noting that these ideals are not always attainable. They also point out that sometimes very small effects are meaningful, as in genome-wide association studies, and that a larger sample size provides a better estimate of those effect sizes.
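The statistical point both sides are arguing over is easy to see in a toy simulation (my own illustration, not anything from the correspondence): with a true effect of just 0.05 standard deviations, a two-group comparison finds nothing at n = 20 per group, but tends to sail past p < 0.05 once each group has tens of thousands of samples.

```python
import random
import statistics
from math import erf, sqrt

random.seed(0)  # reproducible draws

def two_sample_z_p(a, b):
    """Two-sided p-value for a difference in means (normal approximation,
    reasonable here because the interesting case is large samples)."""
    se = sqrt(statistics.variance(a) / len(a) + statistics.variance(b) / len(b))
    z = (statistics.mean(a) - statistics.mean(b)) / se
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

true_effect = 0.05  # a "tiny" effect: 5% of a standard deviation

for n in (20, 200, 20000):
    treated = [random.gauss(true_effect, 1) for _ in range(n)]
    control = [random.gauss(0.0, 1) for _ in range(n)]
    print(n, round(two_sample_z_p(treated, control), 4))
```

At small n the tiny effect is undetectable; at very large n it is almost always declared significant, which is exactly Ashton's point that statistical significance alone says nothing about whether an effect matters.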

“Surely God loves the 0.06 nearly as much as the 0.05”

Next up: Peter Bacchetti of the University of California, San Francisco. Like Ashton, Bacchetti believes that small sample size is not the real problem in neuroscience research. He identifies yet another issue in our research practices, arguing that the real problem is blind adherence to the p = 0.05 standard. Dichotomizing experimental findings into successful and unsuccessful bins (read: publishable and basically unpublishable bins) based on this arbitrary cutoff leads to publication bias, misinterpretation of the state of the field, and difficulty generating meaningful meta-analyses (not to mention the terrible incentive for scientists to cherry-pick data, experiments, animals, analyses, etc. that “work”).

Ioannidis and colleagues essentially agree, saying that a more reasonable publication model would involve publishing all experiments’ effect sizes with confidence intervals, rather than just p-values. As this "would require a major restructuring of the incentives for publishing papers" and "has not happened," however, Ioannidis and company argue that we should fix a tractable research/analysis problem and do our experiments with a more reasonable sample size.
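For readers who haven't met them, the effect-size-with-confidence-interval reports that Ioannidis and colleagues prefer are simple to compute. Here is an illustrative sketch (my own, not code from any of the papers) of Cohen's d with an approximate 95% CI, using a standard normal-approximation formula for the standard error; the example data are made up:

```python
import statistics
from math import sqrt

def cohens_d_with_ci(a, b, z=1.96):
    """Standardized mean difference (Cohen's d) with an approximate 95% CI,
    using the common normal-approximation formula for the SE of d."""
    na, nb = len(a), len(b)
    pooled_sd = sqrt(((na - 1) * statistics.variance(a) +
                      (nb - 1) * statistics.variance(b)) / (na + nb - 2))
    d = (statistics.mean(a) - statistics.mean(b)) / pooled_sd
    se = sqrt((na + nb) / (na * nb) + d * d / (2 * (na + nb)))
    return d, (d - z * se, d + z * se)

# Made-up example: the reader sees both the size of the effect and how
# uncertain it is, instead of a bare "p < 0.05" or "n.s."
treated = [1, 2, 3, 4, 5]
control = [0, 1, 2, 3, 4]
d, ci = cohens_d_with_ci(treated, control)
print(f"d = {d:.2f}, 95% CI ({ci[0]:.2f}, {ci[1]:.2f})")
```

A wide interval around a modest d makes the uncertainty explicit in a way a single p-value never does.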

Mo samples mo problems.

Finally: Philip Quinlan of the University of York, UK. Quinlan cites a paper titled "Ten ironic rules for non-statistical reviewers" to make the argument that small sample size studies really aren't so bad after all. Besides, he says, experiments that require a large sample size are just hunting for very small effects.

Ioannidis and company essentially dismiss Dr. Quinlan entirely. They respond that underpowered studies will necessarily miss effects that are not truly huge. Larger studies allow a more precise estimation of effect size, which is useful whether the effect is large or small, and finally, what constitutes a "meaningful" effect size is often not known in advance. Such an assessment depends entirely on the question and data already at hand.

There you have it, folks! If you have any of your own correspondence, feel free to post it in the comments section.

The Nature Reviews Neuroscience Commentaries

Commentary by John Ashton

Commentary by Peter Bacchetti

Commentary by Philip Quinlan

Response by Button et al.

Art Exhibit Extravaganza 2013: a postdoc appreciation week event

An email soliciting submissions for an upcoming visual arts exhibit, in celebration of Postdoc Appreciation Week, was recently sent to the NeuWrite West mailing list. The event sounds like fun, and I know we've got some talented postdocs out there, so I've reposted the message in its entirety below.

Art Exhibit Extravaganza flyer_08092013

Dear Postdocs,

The SUPD Art Committee is organizing a Visual Art Exhibit for the postdoc community, “Art Exhibit Extravaganza 2013”. Postdocs (plus significant others) from all Stanford Schools and affiliated institutions are encouraged to apply.

This is a unique opportunity for you to share your artistic talent with the postdoc community at large and to expand your horizons.

The SUPD Art Exhibit Extravaganza will keep you entertained during Postdoc Appreciation Week (September 16-20) at The Lorry I. Lockey Stem Cell Research Building. It will feature paintings/drawings, photography/industrial design, sculpture/pottery and mixed-media works. There is no specific theme, so be creative!

Attached is the flyer for the event. If you are interested in submitting your artwork, please click on the link below and fill out the online submission form. The deadline for submission is Friday, August 23rd. Email the SUPD Art Committee at SUPDART@gmail.com if you have any questions.

What are you waiting for? Apply now… Online submission link

Sincerely, SUPD Art Committee

Ermelinda Porpiglia, Jun Yan, Luqia Hou, Ramon Whitson, Van Dang, Viola Caretti,

Antoine de Morree, Catherine Gordon


Astra Bryant

Astra Bryant is a graduate of the Stanford Neuroscience PhD program in the labs of Drs. Eric Knudsen and John Huguenard. She used in vitro slice electrophysiology to study the cellular and synaptic mechanisms linking cholinergic signaling and gamma oscillations – two processes critical for the control of gaze and attention, which are disrupted in many psychiatric disorders. She is a senior editor and the webmaster of the NeuWrite West Neuroblog.

Ask a Neuroscientist: How to Train Your Brain

In this edition of Ask a Neuroscientist, we’ll answer two questions that address a similar principle: can you train yourself to have a better brain?

The first question comes from Allyson Thomley, who writes:

“I am an elementary science teacher seeking to reach a better understanding of how the brain works. As a novice, it has been difficult to sort out the pseudoscience from valid, data-supported information. Sadly, there is a great deal of misinformation circulating amongst teachers who are genuinely trying to incorporate brain research into their practice.

One such claim that I have come across more frequently has to do with exercises that 'cross the midline.' It is suggested that by engaging in activities in which the right arm or leg is crossed over to the left side, connections between the right and left hemispheres of the brain are strengthened. Any grains of truth here?”

This idea appears to have originated with (or at least been most heavily propagated by) Paul and Gail Dennison and their commercial learning program, Brain Gym. They call their program “educational kinesiology,” and claim that engaging in activities that “recall the movements naturally done during the first years of life when learning to coordinate the eyes, ears, hands, and whole body” can dramatically improve concentration and focus, memory, academics, physical coordination, relationships, self-responsibility, organization skills, and attitude.

Those are quite extraordinary claims, and as the saying goes, extraordinary claims require extraordinary evidence, of which they provide little to none. In fact, there are no peer-reviewed, controlled studies testing whether these exercises do anything at all. All of the papers they use to support their claims are self-published in the journal The Brain Gym Global Observer. On their website, they address why there are no peer-reviewed articles supporting their claims, explaining that because a scientific study would require that some students receive the Brain Gym training (the experimental group) and some receive no training or a different kind of training (the control group), it would be unethical to deprive some students of the Brain Gym training.

Any such study would only last a few weeks, or a few months at most, so this excuse is pretty weak, and it’s a huge red flag for the validity of their claims. That being said, we can’t completely rule out the general idea that crossing-the-midline exercises have a positive effect on learning, because the idea has never been rigorously tested.

The underlying science – that performing an activity that simultaneously engages both cerebral hemispheres can improve cognition – does appear to be true. The best-studied example is musicians who began training during early childhood. Neurons on either side of the cortex send axons across the midline, where they make synapses with neurons on the other side. The axons are covered in a white substance called myelin, which acts as an insulator, protecting the electrical communication between neurons from leakage and increasing the speed at which signals travel down the axon. This collection of axons crossing the midline is called the corpus callosum, and research has shown that the corpus callosum is larger in early-trained musicians than in late-trained musicians and nonmusicians, especially if training began before the age of 7.

The hypothesis is that because musical training involves the coordination of multiple modalities – i.e. taking visual and auditory input (reading and listening to music, respectively) and coordinating it with motor output (playing the instrument) – the connections between these brain areas become stronger and more tightly connected, resulting in better sensorimotor integration. And indeed, early-trained musicians have better spatial and verbal memory, attention, mathematics skills, and perform better on other tasks involving the integration of multiple sensory and motor inputs. You can find a nice review on the topic here: The Musician's Brain. 

So, while the Brain Gym technique does not seem like a good candidate, encouraging your students to learn an instrument could go a long way in improving their cognitive functions. Unfortunately, adults who learn an instrument do not see the same improvements.

Our second question comes from Kelly Bertei, who asks:

“Does playing games to improve working memory work? If so, since my brain is only so big, would other parts of my brain reduce in functioning to accommodate for increases in working memory?”

The literature on this is very mixed – some reports show that these games can lead to increased working memory and other measures of cognitive function, whereas other studies show no difference in performance.

For example, in a paper published earlier this year in the online journal PLOS ONE, researcher Rui Nouchi and colleagues asked 34 volunteers to play either the brain training game Brain Age (which the authors created and profit from, it should be noted) or Tetris. They played for 15 minutes a day, 5 days a week, for 4 weeks, and were tested on cognitive performance before and after the training period. Interestingly, both groups performed better after the training than before: the Brain Age group showed greater improvements in executive function, working memory, and processing speed compared to the Tetris group, while the Tetris group showed greater improvements in attention and visuo-spatial ability.

So these results seem to support the idea that brain training exercises can improve some aspects of cognitive function. However, another paper published in 2013 in the journal Computers in Human Behavior (which is a real journal, and actually looks pretty awesome) showed no improvement in cognitive function after 3 weeks of training. In this study, volunteers were asked to play either Brain Age, Dr. Kawashima’s Brain Training (a game they designed themselves), Phage Wars (an online strategy game), or no game at all. They were tested on cognitive performance before training, immediately after training, and a week after training had ceased. Most of the groups showed no significant difference in performance, positive or negative, across all time points, the one exception being the Phage Wars group, who performed significantly worse in the follow-up test than they did immediately after the training period.

That’s only two papers; there are many more out there, some showing that these brain training games improve cognition and some showing that they do not. Basically, science hasn’t figured this one out yet.

Lest you think there is nothing you can do to make your brain work better, there is one activity that has been shown to improve working and long-term memory, improve mood, stave off dementia in old age, and, in general, make your brain and body happy – cardiovascular exercise. Exercise triggers a molecular cascade in the brain that ultimately increases synaptic plasticity, that is, the ability of synapses to strengthen or weaken in response to stimuli. This, in turn, is believed to improve learning, memory, and other forms of cognition.

Exercise also increases the birth of new neurons in the hippocampus, a part of the brain important for learning and memory. Which brings me to the second part of your question: whether improving memory would reduce the function of another brain area. Cardiovascular exercise does in fact increase the volume of the hippocampus by about 2%, and it is reasonable to wonder whether that draws resources away from other brain areas. But as we saw with the early-trained musicians, enlarging a brain structure could instead improve the functioning of neighboring regions as the new neurons make more connections. It’s unknown what the limits of this are, though, and as far as I can tell, no one has gone looking for deficits in other brain regions following the increase in hippocampal size, so a trade-off remains possible.

Now let’s all go for a run!

If you have a question for one of our neuroscientist contributors, email Astra Bryant at stanfordneuro@gmail.com, or leave your question in the comment box below.

Studying Sleep the High-Throughput Way


“Sleep remains one of the least understood phenomena in biology,” reads the first sentence of a recent review in Cell (1). Though humans spend a third of their life sleeping, neurobiologists don’t really understand how or why we sleep. The scientific method proceeds from a hypothesis, and formulating a hypothesis requires some initial information, which is not really there for sleep. What can scientists do to gather this initial information? A favorite approach of molecular biologists over the past forty years has been the high-throughput screen, a fancy term for trying a bunch of things and seeing if they affect the process of interest. Until recently, it was impractical in neurobiology because it was too hard to collect and analyze the required large amounts of data. However, advances in computing power and the availability of a certain device called the Zebrabox, which I’ll explain later, made it possible for Jason Rihel and colleagues to apply the high-throughput screen to the neurobiology of sleep (2).

To paraphrase from that paper’s abstract, the authors set out to find new drugs that affect sleep and to discover if any known proteins have a previously unknown effect on sleep. I am not a sleep expert or even a neurobiologist, but my background in systems biology has taught me a thing or two about high-throughput screens. Below I will explain what makes a good high-throughput screen, what Rihel and colleagues have accomplished, and what they could have done better.

A good high-throughput screen generates hypotheses. It fills the initial void of knowledge with information that can be used to perform more targeted experiments. To perform a screen, biologists set up an experiment that reduces what they are studying to some measurement, change one thing in their experiment and make the measurement, change another thing and make the measurement, and repeat this hundreds or even thousands of times.
To keep themselves from going crazy, they try to set up a simple measurement, so that repeating it ad nauseam is not too tedious, time-consuming, or labor- and resource-intensive. However, the measurement shouldn’t be so simple that it no longer relates back to what is being studied.

A classic example of a smashingly successful screen is the work of Lee Hartwell and colleagues on cell division cycle mutants in budding yeast in the 1970s (3). They were studying how cells divide. A yeast cell assumes a sequence of distinctive shapes as it divides, so they reduced cell division to whether a cell has a normal or abnormal shape. The things they changed were genes. By mutagenizing yeast, examining the shape of the resulting cells, and then mapping the mutant loci, they discovered many genes that affect cell shape. One of their hits, named cdc28, was revealed by subsequent targeted experiments to be the master regulator of cell division across eukaryotic model organisms and is intensively studied to this day. Without the screen for yeast mutants of cell shape, no one might ever have connected this particular gene with the cell division cycle.

What did Rihel and colleagues do in their screen? First, they defined sleep simply enough to make it amenable to screening. Until a decade ago, the scientific definition of sleep was an altered state of electrical activity in the brain, as measured by sticking electrodes onto the scalp (1, 4, 5). Sticking electrodes to scalps is fine for making a handful of measurements, but doing it enough times for a screen is impractical, especially since the animals involved, i.e. primates, rats, mice, or birds, are relatively large and expensive to maintain. Rihel and colleagues instead used the inexpensive and easy-to-maintain zebrafish, and they defined sleep as lack of movement, following a push begun in the 1980s to define sleep as a behavior (1). Since they were looking for new drugs, the things they changed were chemicals added to the aquarium water.

Detecting lack of movement may seem like a simple measurement to make, but it’s not. Back in Lee Hartwell’s days, some poor grad student would have actually been watching the zebrafish, or movies of zebrafish, for inactivity. Luckily, technology has progressed, and Rihel and colleagues were able to buy a big blue box called the Zebrabox, sold by a company named Viewpoint. The Zebrabox is equipped with a 96-well-plate holder, a video camera, and custom video processing software called Videotrack. Rihel and colleagues placed their more than 50,000 zebrafish larvae into 96-well plates, added one of over 5000 chemicals to each well, popped the plates inside the Zebrabox, recorded movies, and analyzed the movies for lack of motion. The chemicals that made zebrafish move more or less were considered hits. The targets of those chemicals, gleaned from database annotations and manual literature searches, were by extension implicated in sleep regulation.
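The hit-calling logic of such a screen can be sketched in a few lines. This is purely my own toy version (the compound names, movement counts, and z-score cutoff are invented for illustration; the actual analysis in the paper was far more elaborate): a compound is flagged if the fish it treated moved much more or much less than control fish.

```python
import statistics

def call_hits(control_activity, compound_activity, z_cutoff=3.0):
    """Flag a compound as a 'hit' if the mean movement of its wells lies
    more than z_cutoff control standard deviations from the control mean."""
    mu = statistics.mean(control_activity)
    sd = statistics.stdev(control_activity)
    hits = {}
    for compound, wells in compound_activity.items():
        z = (statistics.mean(wells) - mu) / sd
        if abs(z) >= z_cutoff:
            hits[compound] = round(z, 2)
    return hits

# Invented movement counts per well for untreated control fish
controls = [98, 102, 100, 99, 101]
# Invented compounds: one that sedates, one that does nothing
compounds = {"sedative-like": [80, 82, 81], "inert": [100, 101, 99]}
print(call_hits(controls, compounds))  # only "sedative-like" is flagged
```

On these made-up numbers only the "sedative-like" compound is flagged, with a large negative z-score; a real screen would also correct for the thousands of comparisons being made.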

The Zebrabox

[youtube]http://www.youtube.com/watch?v=ot48aM8Isvk[/youtube]

Larval zebrafish locomotor activity assay (A) At four days post fertilization (dpf), an individual zebrafish larva is pipetted into each well of a 96-well plate with small molecules. Automated analysis software tracks the movement of each larva for 3 days. Each compound is tested on 10 larvae. (B) Locomotor activity of a representative larva. The rest and wake dynamics were recorded, including the number and duration of rest bouts (i.e. a continuous minute of inactivity, (7)), the timing of the first rest bout following a light transition (rest latency), the average waking activity (average activity excluding rest bouts), and the average total activity. Together, these measurements generate a behavioral fingerprint for each compound. (Rihel et al, 2010)
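The measurements in the caption are concrete enough to sketch in code. Here is an illustrative computation of such a fingerprint from a per-minute activity trace (my own simplification, not Videotrack's actual algorithm; in particular I treat rest latency as simply the first inactive minute of the trace):

```python
def behavioral_fingerprint(minute_activity):
    """Summarize a per-minute movement trace, treating a rest bout as a run
    of one or more consecutive inactive (zero-movement) minutes."""
    rest = [a == 0 for a in minute_activity]
    # Collect lengths of consecutive runs of inactive minutes
    bouts, run = [], 0
    for inactive in rest:
        if inactive:
            run += 1
        elif run:
            bouts.append(run)
            run = 0
    if run:
        bouts.append(run)
    awake = [a for a in minute_activity if a > 0]
    return {
        "rest_bouts": len(bouts),
        "mean_bout_minutes": sum(bouts) / len(bouts) if bouts else 0.0,
        "rest_latency": rest.index(True) if any(rest) else None,  # simplified
        "waking_activity": sum(awake) / len(awake) if awake else 0.0,
        "total_activity": sum(minute_activity),
    }

# A toy 6-minute trace of movement counts per minute
print(behavioral_fingerprint([5, 0, 0, 3, 0, 4]))
```

Comparing such fingerprints across compounds is what lets a behavioral screen distinguish, say, a drug that lengthens rest bouts from one that merely dampens waking activity.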

So, how does the screen performed by Rihel and colleagues do in terms of generating hypotheses? Some chemicals they tested, including nicotine and mefloquine (see my previous post for more on this drug), made zebrafish move differently. However, their results have little credibility because the authors do not justify why lack of motion in zebrafish is definitely sleep, and not tiredness or death. It is also debatable how good the drug target annotations are. Some of the more surprising new sleep regulators, like inflammatory cytokines, may be genuine. Or they may simply be the only annotated targets of the drugs in question, while the effect of a drug on sleep may be due to some side effect unrelated to the annotated target. I hope that the hits are all genuine and that this work leads to new insights into sleep. But tellingly, a review of recent sleep literature (1) focused on how Rihel and colleagues confirmed what was already known, rather than on what they may have discovered.

Part of the problem with credibility has to do with their black-box, or rather blue-box, approach. They put zebrafish into the Zebrabox and out came all their data for the Science paper. Without knowing how the video processing software Videotrack works, and without a positive control of a drug that is known to make zebrafish move a lot and a negative control of a drug that is known to sedate them, I can only trust that Videotrack gave Rihel and colleagues the result that they claim. Automation can make previously impossible experiments possible, but if the results are ambiguous and untrustworthy, it’s of little value.

In summary, Rihel and colleagues applied high-throughput screening, responsible for groundbreaking discoveries in other areas of biology, to sleep. Notably, they were able to use a complicated measurement of zebrafish behavior because they used an automated measurement and analysis device. But the automated device also robbed their results of credibility. Thus, their paper makes me wish for someone to do a similar screen but in a more transparent way and with a more precise experimental definition of sleep. Then we could make some hypotheses and kick-start the scientific method in the study of sleep.

 Sources
  1. Sehgal A and E Mignot. (2011). “Genetics of Sleep and Sleep Disorders.” Cell. 146:194-207. Paywall.
  2. Rihel J et al. (2010). “Zebrafish Behavioral Profiling Links Drugs to Biological Targets and Rest/Wake Regulation.” Science. 327:348-351. Paywall.
  3. Hartwell LH et al. (1973). “Genetic Control of the Cell Division Cycle in Yeast: V. Genetic Analysis of cdc Mutants.” Genetics. 74: 267-286. Open access.
  4. Zimmerman JE et al. (2008). “Conservation of sleep: insights from non-mammalian model systems.” Trends in Neurosciences. 31:371-376. Paywall.
  5. http://en.wikipedia.org/wiki/Electroencephalography

 

Arl-8: The clasp on a fully-loaded synaptic spring.

A series of papers from Kang Shen's lab, which I have recently joined, sheds light on a fundamental step in how pre-synapse-forming proteins are transported down the axon and assembled into functional synapses at exactly the right locations. Here, I’ll be reviewing the first paper in this series, by Klassen and colleagues, from 2010.

Neurons communicate with each other by sending electrical signals down axons and across synapses to target neurons. These synapses, along and at the ends of axons, can be extremely far from the cell body, where most proteins are made. Transporting synaptic proteins down the axon and forming synapses at the correct locations are thus two formidable challenges for the developing neuron. A 2010 paper from Kang Shen’s lab shows that these two processes appear to be intricately linked. The paper provides key evidence that, instead of pre-synaptic proteins being transported in separate pieces and assembled from scratch on site, all the major protein components of the pre-synapse are transported together, ready for quick and easy assembly upon arrival. It shows that Arl-8 is the clasp that holds these spring-loaded pre-synapse cargos in check while they are in transit, preventing them from jumping off the transport train and assembling functional synapses prematurely.

Each neuron in the brain connects to only a small subset of the other neurons, and the selection of appropriate targets is crucial to forming a well-functioning brain. After an axon has reached its target destination, it connects with neurons in the target area by forming two types of synapses: terminal synapses, at the very ends of axon branches, and en passant synapses, the bud-like bright spots along the axon. Both types are depicted below in this image of an axon targeting the monkey visual cortex.

DL_Fig1

 

In both cases, there are two fundamental challenges that the neuron needs to solve. 1) Neurons somehow need to transport all these synapse-forming proteins from the cell body down the axon to the pre-synaptic specializations in the target area, either to terminal synapses or en passant synapses.  2) Once in the target area, the synapse-forming proteins somehow need to form the right number of en passant and terminal synapses, and in the right locations too!  What a fantastically complicated cell biology problem! Yet, somehow, amazingly, evolution has come up with mechanisms to enable these synapses to form in their correct locations.

Now, how to go about deciphering these mechanisms of axonal transport and synapse formation? In the mammalian brain, most neurons are very complicated; they send their axons long distances through an absolute forest of dense axon bundles, only to arrive at a destination of cell bodies and dendritic trees that is just as dense and complex. Fortunately, many of the same cell biological mechanisms at work in the mammalian brain also operate in the brains of much simpler creatures, such as the tiny worm C. elegans, which has only 302 neurons in total. One of these, the DA9 motoneuron, makes exactly 25 en passant synapses along its single, unbranched axon as it courses along the dorsal nerve cord (Panel A, below), making it an excellent model for studying these questions of synapse assembly.

DL_Fig2

Now, to begin to understand the mechanisms of pre-synaptic transport and assembly, one needs to examine the roles of individual proteins in these processes. By disrupting the protein building blocks one at a time, we can see what role each actor plays in the exquisite biological “production” of assembling a pre-synapse. Klassen and Shen therefore chemically induced mutations to create a collection of mutant worms. They then examined the distribution of a specific pre-synaptic protein, Rab-3, in the DA9 axon of each mutant; whenever the Rab-3 distribution was irregular, they searched the worm’s genome for the mutant gene responsible.

Rab-3 is a protein that closely associates with synaptic vesicles, the small bubbles of membrane at the pre-synaptic specialization that contain packets of neurotransmitter, and it helps release these vesicles so their contents can travel across the synapse. Rab-3 is present in small amounts all along the axon, but the large, bright clusters of Rab-3 (visible in white) occur where synaptic vesicles accumulate. Such vesicle accumulation marks a presumptive pre-synaptic site.

By keeping track of where Rab-3 clusters formed, Klassen and Shen could screen their mutants for patterns of synaptic vesicle clusters that differed from the evenly spaced 25 clusters that normally form along the middle of the axon. Using this strategy, they isolated a mutant worm in which the Rab-3-marked vesicle clusters formed too close to the cell body (Panel C, bottom) and never got far enough down the axon to form the 25 synapses seen in normal worms (Panel B, middle). This mutant carried a defective gene encoding a small protein called Arl-8.
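To make the screening logic concrete, here is a toy sketch in Python (purely illustrative, not the authors' actual analysis pipeline) of how Rab-3 puncta positions along a normalized axon could distinguish the wild-type pattern from a proximally shifted, arl-8-like one. The positions, cutoff value, and function names are all invented for this example.

```python
# Toy model: puncta positions run from 0.0 (cell body) to 1.0 (axon tip).
# The cutoff of 0.35 is an arbitrary illustrative threshold.

def mean_position(puncta):
    """Average normalized position of Rab-3 puncta along the axon."""
    return sum(puncta) / len(puncta)

def classify(puncta, proximal_cutoff=0.35):
    """Call a distribution 'proximal-shifted' (arl-8-like) if the puncta
    cluster near the cell body, otherwise 'wild-type-like'."""
    if mean_position(puncta) < proximal_cutoff:
        return "proximal-shifted"
    return "wild-type-like"

# Wild type: 25 evenly spaced clusters along the middle of the axon.
wild_type = [0.3 + 0.4 * i / 24 for i in range(25)]

# arl-8 mutant: the 25 clusters pile up close to the cell body.
arl8_mutant = [0.05 + 0.01 * i for i in range(25)]

print(classify(wild_type))    # wild-type-like
print(classify(arl8_mutant))  # proximal-shifted
```

In the real screen, of course, the "classifier" was a person looking at fluorescence images, but the underlying comparison is the same: where along the axon do the bright Rab-3 dots sit?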

In the axons of Arl-8 mutants, the Rab-3 clusters sat very close to the neuron’s cell body, and far fewer clusters were found toward the middle and end of the axon. It was as if all the synaptic vesicle proteins, marked by the presence of Rab-3, had jumped off the transport train far too early in the Arl-8 mutant worms. Arl-8 therefore seems necessary to prevent premature aggregation of Rab-3 and to ensure proper transport of synaptic-vesicle-associated proteins.

Now, the evidence discussed so far implicates Arl-8 in preventing the aggregation of one of the two major classes of pre-synaptic proteins: the synaptic-vesicle-associated proteins, of which Rab-3 is a member, and which carry out the pre-synapse’s primary job of releasing neurotransmitter. Yet there is an entirely different class, the active zone proteins, a group of sticky organizing proteins that collectively form the structural backbone of the pre-synapse. One can think of the active zone proteins as the engineers and construction workers who assemble and structurally support a missile battery, while the synaptic vesicle release proteins are an entirely different group: the soldiers who operate the missile machinery and set off the fuse.

Previously, it was thought that these two kinds of proteins were transported down the axon separately from one another: all the active zone proteins travelling together in one set of transport vesicles, and all the vesicle-associated proteins, like Rab-3, travelling together in a completely separate set. In other words, people thought the engineers travelled to the missile assembly site in one railroad car, and the soldiers in a totally separate railroad car. A second finding concerning Arl-8, however, challenged this theory that the two sets of proteins are transported separately and only jointly assembled at the synapse.

The second finding is that when one of these sticky active zone proteins is mutated in addition to Arl-8, the distribution of Rab-3 aggregates along the DA9 axon is much less severely disrupted. In fact, these double mutants showed a pattern of Rab-3 puncta closer to that of normal worms, with little bright dots spread out along the entire axon. Klassen and Shen inferred that the sticky active zone proteins were causing Rab-3 and other synaptic-vesicle-associated proteins to aggregate and cluster together. The fact that mutating these sticky, clustering active zone proteins results in fewer Rab-3 puncta getting ‘caught’ early in the axon suggests that both classes of proteins are actually transported together.
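The genetic reasoning here is a classic suppression argument, and it can be sketched as a tiny qualitative model in Python (again illustrative only; the categories and function are invented, not from the paper): premature clustering requires the sticky active zone protein and is normally held in check by Arl-8, so removing the active zone protein in an arl-8 mutant partially rescues the distribution.

```python
# Toy genetic-logic model of the suppression result. Each genotype maps
# to a qualitative level of premature Rab-3 aggregation near the soma.

def premature_clustering(arl8_intact, active_zone_intact):
    """Qualitative premature aggregation under this simplified model:
    Arl-8 suppresses aggregation in transit; sticky active zone
    proteins drive it."""
    if arl8_intact:
        return "low"  # the clasp is on: cargoes stay on the train
    # Without Arl-8, aggregation depends on the sticky active zone protein.
    return "high" if active_zone_intact else "intermediate"

print(premature_clustering(True, True))    # wild type: low
print(premature_clustering(False, True))   # arl-8 mutant: high
print(premature_clustering(False, False))  # double mutant: intermediate
```

The key inference is in the last line: if the two cargo classes travelled in separate vesicles, removing a sticky active zone protein should not change where Rab-3 vesicles stall, so the partial rescue argues they share the same transport packets.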

This finding provided evidence that, rather than the active-zone backbone proteins travelling in one type of vesicle and the vesicle-associated proteins in another, with an entire synapse assembled only once all the packaged material arrived on site, both types of cargo are in fact transported together. Or, to return to our analogy: the engineers, the soldiers, and all the components of a mobile missile battery travel to the target site together. In essence, all the components of the pre-synapse are transported as a unit, ready for quick assembly and deployment when they reach the correct spot.

The reason this idea of co-transport is cool is that it totally changes the way we might think about how pre-synapses are set up. Instead of building a pre-synapse from scratch each time the neuron wants to connect to another neuron, laboriously arranging all the right active zone and neurotransmitter vesicle release proteins on site, partially pre-assembled synaptic machinery may be transported down the axon. When these transported packets reach the right place, they are immediately ready to spring into action and form a fully functioning synapse. Arl-8, then, is the clasp that prevents these co-transported pre-synaptic proteins from spontaneously aggregating and springing into action to form a synapse too early along the axon.

An immediately exciting question raised by this research is how Arl-8 might interact with the proteins that establish synapses at particular locations. Perhaps some proteins inhibit Arl-8, in effect releasing its clasp on synapse formation? Perhaps others counterbalance Arl-8 and actively promote the clustering of pre-synaptic proteins and the formation of synapses? Is Arl-8 part of some master switch that is modulated to set up pre-synapses at specific locations in the brain? If so, discovering proteins that interact with Arl-8 could give us clues to questions like how an axon from a vision-responsive part of the brain knows to form lots of synapses onto face-processing neurons, and not onto irrelevant neurons close by.

Linky and the Brain: Podcast Edition

Linky-and-the-Brain-e1367303270155.png

Sometimes, doing science is mind-numbingly boring. Slicing your 20th acute brain slice. Re-sectioning your 5th visual cortex. Counting your millionth c-Fos-positive neuron.

Last week, I started listening to podcasts while patching neurons. In the process I rediscovered a science-themed delight that I'd like to share with you all.

(Warning: the podcast I am about to recommend may preclude the fine motor control required to successfully patch a 10 µm diameter neuron.)

imc

From BBC Radio 4, The Infinite Monkey Cage is a panel show about science featuring the talented and charming duo of Professor Brian Cox (physicist, nature program host), and Robin Ince (comedian). These hosts are joined by a rotating panel of diverse guests, including both scientists and comedians.

The episode that inspired me to type this post, "What is Death?", features comedian Katy Brand (who has a degree in theology from Oxford), biochemist Nick Lane, and forensic anthropologist Sue Black. As a preview of the witty banter you can expect from an episode of The Infinite Monkey Cage, here is some dialogue I have transcribed from that episode.

During a prolonged discussion of the definition of "alive", and in reference to an even longer discussion of whether strawberries are alive or dead:

Nick Lane: If you put a plastic bag over your head, you’ll be dead in about a minute and a half.

Brian Cox: But I put strawberries in bags all the time.

During a discussion on reproductive fitness:

Are pandas living in a Beckett play?

Introducing an upcoming episode, recorded at the Glastonbury Festival:

Robin Ince: We are going to be discussing quantum theory on the Many Worlds stage to an audience who will already be approaching a point of questioning their own existence, so therefore we will be using, in front of their cider-drenched eyes, imaginary numbers in an infinite dimensional phase space, and question the possibility of free will in a probabilistic universe. And that’s before we lead them into the Mexican Wave.

Brian Cox: Or the Mexican Particle.

Currently, series eight of The Infinite Monkey Cage is ongoing. Episodes from the current and previous series can be downloaded from iTunes, or can be listened to via web browser at the BBC Radio 4 website.

As of this moment, 42 episodes are available. Episode titles/subjects include: "Does Size Matter", "Oceans: The Last Great Unexplored Frontier", "Science Mavericks", "Is Cosmology Really a Science", "So You Want to Be An Astronaut", and "I'm a Chemist Get Me Out of Here".


Astra Bryant

Astra Bryant is a graduate of the Stanford Neuroscience PhD program in the labs of Drs. Eric Knudsen and John Huguenard. She used in vitro slice electrophysiology to study the cellular and synaptic mechanisms linking cholinergic signaling and gamma oscillations – two processes critical for the control of gaze and attention, which are disrupted in many psychiatric disorders. She is a senior editor and the webmaster of the NeuWrite West Neuroblog.

Open Channel: Advice to Pre-Quals Grad Students

Hey folks. This week, I thought we could tap into the accumulated experience of those Neuroblog readers who have made an excellent life decision, and entered graduate school. In honor of my first Neuroscience Superfriends* seminar, let’s discuss the many potential answers to the following question:

What advice would you give to first-year students in a Neuroscience PhD program?

I’ve had a response to this question ready and waiting for the past 2 years. Here it is, verbatim from communications with the talented graduate student who introduced me before my seminar.

Over the next few years, your science is going to fail. A lot. It'll suck. Like, soul-crushing levels of suckitude. Lest you take all that failure to heart, get yourself an external control. Pick an activity, one you aren't already extremely good at. An activity where there is a reasonable chance that as you continue to do said activity, you'll get better at it. Practice that activity. Watch yourself get better. Remind yourself that you are capable of learning; that working at something will make you better at it. Then, when your experiments fail for the 50 billionth time in your fourth year, you can remind yourself that you aren't a complete fuck up. Science just sucks, sometimes.

So that’s my bit of advice.

Senior graduate students (or not so senior students), what is your advice? What do you all think is the one bit of advice you’d give to the newly minted Graduate Student in Neurosciences?

The comment section is open. Send in your thoughts!

 

*A seminar series wherein senior-level graduate students give 30 minute talks to the Stanford Neuroscience PhD community.



You work on stem cells, right? OK, here's your space suit.

The-author-as-astronaut-crop.png

I’m really only half joking when I say I want to be an astronaut when I grow up. When experiments aren’t going well, my friends and I discuss the various ways in which we could convince the International Space Station (ISS) that we’re Star Fleet material. Since the relaxation of military and flight time requirements, we’ve been looking forward to stepping off this chaos of hard clay to wander darkling in the eternal space. Romance aside, the effects of space travel on health and the unique physiological and psychological conditions of long-range space travel really do interest a growing number of research scientists, myself included. How exciting, then, to hear from my friend Rishi that CASIS (the Center for the Advancement of Science in Space) have issued a request for proposals looking into the effects of microgravity on stem cells. I’m a biologist; surely I work on stem cells. Well, not quite. I work on the immune system, the cells of which do indeed derive from stem cells, but my interests lie much further down the developmental line. I study how a fully mature immune system works to protect the body against infection and how vaccines use the same machinery to protect against diseases before we encounter them. To me, this has obvious applications for interplanetary space exploration.

 

An artist's impression of the author as an astronaut

 

A recent publication in the Journal of Clinical Immunology [1] shows altered immune function following space flight. Levels of inflammation-driving molecules in the blood go up, but specific responses to viruses by T cells go down. Is this a Big Deal? Let’s assume that people aren’t allowed into space if they have a serious illness and that the vacuum of space is clean enough that disease-causing bugs are unlikely to get on board a spaceship and make everyone evolve backwards or suddenly challenge the entire crew to a duel. Does it matter if the immune system is depressed in space? Well, one of the observations highlighted in the paper is that virus-specific responses are reduced. This might not be a problem if we weren’t all riddled with a large number of viruses that are only kept in check by constant immune surveillance. Almost everyone is infected with JC virus, Cytomegalovirus, Epstein-Barr virus, Varicella-Zoster virus, and several other strains of common herpesviruses. Many diseases, including shingles, are associated with a drop in T cell responses and can be severely debilitating. Surely we need someone doing research up there to see how immune responses to these long-term resident viruses change. An outbreak of shingles during a long voyage may not make compelling television, but could severely compromise an astronaut’s ability to function when far from access to antiviral drugs and pain medication. Clearly (if perhaps tinged with a little bias), research into the immune system is much more directly applicable to survival in space than stem cell research, which is still very much in the early stages.

As Commander Chris Hadfield showed so wonderfully, there are many and varied experiments that can be conducted in space that will have an impact on future astronauts. Part of me is tempted to speculate as to why CASIS have limited their remit to stem cells. Is it just because they’re cool and in the news a lot these days? Have the people controlling the purse strings been sweet-talked by a lobbyist with stem cell leanings? Or was this really the result of a long and in-depth study section on the biological priorities of extraterrestrial research? The cynic in me is clamouring for a rant about the whims of science policy but, since I know so little about the process, I should probably save my energies for more productive tasks. Like thinking of ways to convince CASIS that I work on stem cells.

References

1)    Crucian et al. (2013). Immune system dysregulation occurs during short-duration spaceflight on board the space shuttle. Journal of Clinical Immunology 33 (2): 456–465.

 

Editor's Note: The author wishes me to express that she would prefer that CASIS used the correct spelling of 'Centre' in its name. Unfortunately, there was not time for the Center to make this official name change prior to this article going to press. Watch this space for updates.