Monday, March 21, 2011

A general guide to progression in the research PhD career path

I’m writing this in the hope that it can serve as a quick walkthrough on what to expect if you, or someone you know, is heading down the path of getting a doctorate in a research-heavy field.

Accompanying this progression through the ranks will almost inevitably be moves across the country or the world, which is no small thing. Also, remember that research PhD students (though rarely master's students) usually have their tuition paid and are given a stipend throughout school.

The first step, after getting a bachelor’s degree, is graduate school. There are two general options for this, master’s and doctoral level.

Master’s: A lot of people in the sciences view a master’s degree as a stepping stone on the way to a PhD, but it is becoming more common for people to stop their education here and start making money, usually as an academic lab technician or in industry.

Doctoral: This is usually a PhD, but it might go by other letters, like DSc. Research PhDs can take anywhere from 3-7 or more years to complete, although every graduate school will tell you that their students average 5.5 years to completion, which is more or less the national average. There are, generally speaking, two phases of doctoral education: student and candidate.

A doctoral student is someone in the first two or three years of graduate study. At a certain point all students take qualifying exams, also known as “quals,” “comps,” or “prelims,” short for qualifying, comprehensive, and preliminary exams, respectively. These exams are usually very long and rigorous, with both a written and an oral component tailored to the individual student by a committee that also administers the exam. I read somewhere that the purpose of the exam is to keep probing until the student’s knowledge fails, at a suitably advanced level. In some programs, a master’s degree is awarded around the time of qualifying exams.

After all this, the student becomes a candidate, meaning he or she is now working on dissertation research. This is usually very similar to the research the student was already doing for the previous two or three years, but it may involve more independence or break off in a new direction. At the end of graduate school, the student must have a signed dissertation, and it is also common to defend the dissertation before the dissertation committee in a format very similar to the oral part of quals.

Here’s where most people think of the “career” part starting. In reality, most students have been working contacts and lining up positions for a year or more before being awarded their PhD. Some people are even in faculty positions before they are awarded their doctoral degree, although that is becoming more and more rare.

For research PhDs, there are largely three options at this point: a postdoctoral research fellowship, industry, or a faculty position. Postdocs are far and away the most common next step, especially in biomedicine and the life sciences, but there are also sometimes opportunities to start a faculty position right away, which is more common in psychology. Industry is another option, which usually pays a lot more. I don’t know much about industry careers, so I won’t discuss them any further.

Postdocs: These are usually 1-4 years per appointment and most people do 1-3 appointments before they land a faculty position. This is a very important part of a scientist’s career, as she is establishing herself as independent from her PhD advisor, and gaining new technical skills. Most people expect a new faculty member’s lab to continue the research from her last postdoc lab, rather than the lab where she did her PhD work.

Faculty: There are several different rungs of faculty positions. The general division is between tenure track and non-tenure track. You can usually tell the difference by the faculty member’s title. Non-tenure-track faculty are typically lecturers, adjunct professors, or research professors. The tenure-track progression usually runs assistant professor (tenure track, but not yet tenured), then associate professor (tenured, halfway), and finally full professor, which might also just be called professor.

That’s it! There are a million caveats and things to add to this, but hopefully it’ll do as rough scaffolding.

Tuesday, August 11, 2009

A new test for hemispatial neglect

There’s an interesting article about a new neurological exam which found its genesis in an Eastchester High School classroom.

An AP Psychology teacher there was explaining hemispatial neglect, a disorder caused by brain damage in which the person is not consciously aware of one side (left or right) of space or of objects, including the patient’s own body.

During the discussion, one of the students asked the question, “What happens if you ask patients with neglect to clap their hands?” The teacher, Mr. Weisman, didn’t know. So he tracked down Dr. Rafael Llinás, a researcher at Johns Hopkins University, which in turn led to the development of what has been named the Eastchester Clapping Sign (ECS).

Here is a picture of one patient, who showed an improvement between the first and second day of administering ECS. You can see that he's clapping with one hand at the midline, near where his other hand should meet it.



The article in Annals of Neurology outlines how to perform and score the test:

ECS testing was performed as follows: (1) make sure patient's arms and hands are both down at their sides; and (2) ask patient to clap his or her hands. Grading was established as follows: ECS-2 = one-handed clap, respects midline; ECS-1 = searches in the contralateral hemispace for the other hand; ECS-0 = reaches over to clap against the plegic hand; and UTA = unable to assess, that is, does not follow the command; you can try pantomime.
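
Just to make the rubric concrete, here's a minimal sketch of that grading scale as a lookup table. The function and category names are my own shorthand for the published descriptions, not anything from the paper:

```python
# Illustrative only: the ECS grades quoted above, encoded as a simple lookup.
# The observation strings are my own shorthand for the published descriptions.

ECS_GRADES = {
    "one-handed clap, respects midline": "ECS-2",
    "searches contralateral hemispace for the other hand": "ECS-1",
    "reaches over to clap against the plegic hand": "ECS-0",
    "does not follow the command": "UTA",  # unable to assess; try pantomime
}

def grade_ecs(observation: str) -> str:
    """Return the published ECS grade for an observed clapping behavior."""
    if observation not in ECS_GRADES:
        raise ValueError(f"Unrecognized observation: {observation!r}")
    return ECS_GRADES[observation]

# The patient pictured above, clapping one-handed at the midline, would score:
print(grade_ecs("one-handed clap, respects midline"))  # ECS-2
```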

The authors point out that this simple test can be done in 30 seconds, helping doctors to quickly administer the correct medicines to aid in stroke recovery.

This just goes to show that sometimes the best ideas come from people who aren’t involved in research. Mr. Weisman should be commended for following up on his student’s question.

References: dx.doi.org/10.1002/ana.21666

Thursday, March 5, 2009

Precise Figures in Science

I was scanning the most recent issue of Brain Research Reviews when I came across an interesting figure.
Just take a moment and look at it:


If you're thinking "What in the world is that?" or "What is a Matisse painting doing in a neuroscience journal?" or perhaps some more colorful but equivalent phrase, congratulations. Me too.

The text accompanying the figure reads, "Receptor chunking. Isolated receptors statically scattered on the extracellular membrane (upper panel, adapted from H. Matisse, La Musique, 1910, oil on canvas), then cluster together into a “receptor chunk” after activation by the specific ligand ATP, and convey the feeling of emotional liberation into a function (lower panel, adapted from H. Matisse, La Dance, 1910, oil on canvas)." (emphasis mine)

What?

I later noticed an editorial in the same issue:
Free colour illustrations in the online version of articles
We are delighted to be able to inform you about an important development that is of benefit to the authors and readers of Brain Research.
Scientists in many disciplines need to be able to publish their work in colour, but the cost of printing figures in colour can be prohibitive and we realise that not all authors have sufficient funds to cover the charges.
Brain Research now offers the facility for authors to publish their illustrations in colour in the online version of the journal (in ScienceDirect: www.sciencedirect.com) at no cost - regardless of whether the illustrations appear in colour or black-and-white in the print journal.

I guess the authors got excited about this news and went a little overboard.
Figure and quotes taken from Volonté et al., Brain Research Reviews 59, November 2008, pp. 1-8. dx.doi.org/10.1016/j.brainresrev.2008.04.004

Thursday, November 13, 2008

Leave the Brain Out of It

The November 13th issue of Nature has an opinion piece detailing some behavioral economics experiments, entitled ‘The Innovative Brain.’ It’s part of an essay series on innovation that I’ve found enjoyable so far.

The problem is, the research it talks about has nothing to do with the brain, other than all the subjects having one. This theme has been harped on in several other blogs I read, but it’s just annoying to me when such a big-name journal puts out articles that throw in brain words to make things seem more interesting.

Although they claim to present ‘neurocognitive’ data, what the piece actually describes is a tendency for entrepreneurs to be more prone to risk-taking than managers on the Cambridge Gamble Task. Both groups, with mean ages slightly above 50, differ in gambling behavior from their age-matched controls: managers gamble less money, while entrepreneurs offer up more. On a task that doesn’t assess risk-taking, they are similar to their age group.

Adapted from B Sahakian et al.

I doubt many people will find this to be a shocking result. The article goes on to speculate that it may be possible to train young adults to be more innovative, or at least risk-tolerant, in entrepreneurship classes. Ok, maybe you can.

So what does this article have to do with the brain?

There are only a few parts of the article that mention it:

cold processes … are dependent upon the dorsolateral prefrontal cortex

then

hot processes are dependent on the medial and orbital sectors of the prefrontal cortex

and

These cognitive processes are intimately linked to brain neurochemistry, particularly to the neurotransmitter dopamine. Using single-dose psychostimulants to manipulate dopamine levels, we have seen modulation of risky decision-making on this task

There are a lot of things I could say about this. First of all, there is not a double dissociation between the areas of the prefrontal cortex that the authors mentioned and these tasks. I feel that a neuroscientist making such a claim is either being dishonest or, well, not really a neuroscientist.

Second, the ‘single-dose psychostimulant’ experiment the authors cite involves children diagnosed with AD/HD, not cognitively normal 50 year old businessmen. The stimulant they used was methylphenidate, commonly known as Ritalin, which is well known to exert opposite effects when administered to AD/HD children versus normal adults. So they gave it to AD/HD kids, and the kids bet less; adults could be expected to bet more money on Ritalin.

That is all, however, beside the point. An essay about the ‘Innovative Brain’ would talk specifically about how brain structures or nuclei are thought to contribute to innovation. It would discuss neuroscientific techniques like EEG, MRI, PET, or MEG in humans and perhaps more invasive procedures in animals. It would be much closer to this New Yorker article or even this Nature essay (requires subscription).

I’ll conclude with this: If you’re not really talking about neuroscience, leave the brain out of it, because you’re probably making gross generalizations and leaving out a lot of relevant information. The truth is, we don’t really know how any of these complex behaviors come about, and it’s not honest to pretend we do, especially when talking to a lay audience.

References

B Sahakian et al. doi:10.1038/456168a

EE Devito et al. doi:10.1016/j.biopsych.2008.04.017

Monday, November 10, 2008

20th Anniversary Edition of Neuron

Neuron, a well respected journal, recently put out its 20th Anniversary Edition.

The good news is that the online edition is free to the public.

Normally a free edition of a neuroscience journal wouldn't mean much, but this issue has a lot of perspective pieces written by experts on the past, present, and future of their fields. These articles seem to be written for an intelligent lay audience, which is fantastic. It's always exciting to come across things like this that explain the recent history of a scientific subfield in a way that is normally only done in a classroom or in private discussion with a professor.

Take a look, there's some good stuff.

Wednesday, October 15, 2008

Stereotaxic Surgery


Last night I ran into an article on JoVE.com, the Journal of Visualized Experiments, which is a great example of the technique I'm using for my undergraduate thesis. There are some differences, but by and large it's a great window into my lab days.

Normally our mice are euthanized directly following surgery, but recently one of the grad students I work with had to do some lesions to induce sub-threshold Parkinson's disease. This made it necessary to take precautions against infection and include some of the post-op recovery steps that are shown in the video.

That stage of her project was actually easier than what you'll see on JoVE. The crew at Tufts is implanting small screws into the skull to secure a permanent microdialysis probe, while all we needed to do was temporarily insert a cannula to inject a microliter of 6-hydroxydopamine solution.

Anyway, check out the video, and the rest of JoVE for that matter.

One caveat: this video is not for the squeamish. It shows surgery on live animals.

Survivable Stereotaxic Surgery in Rodents

Sunday, September 14, 2008

From Spikes to Decisions: Part 2

Spatial Awareness

Space is an interesting concept, one we often don’t think about. It’s something that has to be calculated to do something as simple as walking across a room or as complicated as navigating a city. It’s an abstract thing, and by that I mean that we have no sensory receptors for space; it’s something that has to be generated as an idea in our heads. Psychologists call what’s generated an internal map.

In 1971, the neuroscientists John O’Keefe and Jonathan Dostrovsky found nerve cells that fire differently according to where an animal is in an enclosure. This was particularly significant because the rat’s position in the maze could be measured, which allowed the researchers to correlate specific cell firing with specific locations. They dubbed these neurons, found throughout the hippocampus, place cells.


Resorting to a graphical representation is probably the best way to illustrate place cell function. I’ve borrowed a handy flash animation from University of Bristol’s spatial memory page to help with that (doesn't work in an RSS reader):





As the animal walks around the enclosure and checks out its environment, certain cells will fire at certain times, and by charting this firing relative to the animal’s position, the researcher can work out the place field of the neuron. Once a rat has learned a space, it can return weeks later and place cells will fire in a very similar way to the first time it walked around the space. Yet if the researcher moves the distal (far away) sensory cues—by rotating colored shapes that mark certain walls while the rat is out of the enclosure or distracted—the firing of these cells will change in proportion to the amount of rotation.
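
To make that "charting firing relative to position" step concrete, here's a minimal sketch of how one might bin tracked positions and spike locations into a firing-rate map. This is my own illustration, not any particular lab's analysis code, and it assumes you already have the tracking and spike data as NumPy arrays:

```python
import numpy as np

def place_field_map(positions, spike_positions, arena_size=1.0, n_bins=20):
    """Estimate a place field as spikes per unit of occupancy in each spatial bin.

    positions:       (N, 2) array of the rat's tracked (x, y) samples over time
    spike_positions: (M, 2) array of the (x, y) location at the moment of each spike
    """
    edges = np.linspace(0.0, arena_size, n_bins + 1)

    # Occupancy: how often the rat visited each bin of the enclosure.
    occupancy, _, _ = np.histogram2d(positions[:, 0], positions[:, 1],
                                     bins=[edges, edges])
    # Spike counts in the same bins.
    spikes, _, _ = np.histogram2d(spike_positions[:, 0], spike_positions[:, 1],
                                  bins=[edges, edges])

    # Rate per bin; bins the rat never visited are left as NaN so they can be masked.
    with np.errstate(divide="ignore", invalid="ignore"):
        rate_map = np.where(occupancy > 0, spikes / occupancy, np.nan)
    return rate_map
```

The hot spot in the resulting map, where the cell fires most per visit, is the neuron's place field.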

The above examples show that a rat makes the primary judgment of where it is via distal sensory cues and that place cell firing is directly related to this decision. Itzhak Fried, a UCLA neurosurgeon, and his collaborators have demonstrated the same principles in human epileptic patients.

We’re visual creatures, and for most of us it may be hard to separate ideas of space from our visual world, but internal maps and place cell firing aren’t wholly dependent on sight. Bats, whales, and blind people all navigate through space without visual cues. It may come as no surprise, then, that researchers have shown that distal visual cues aren’t the only thing controlling the firing of place cells in rats. The input is polysensory, integrating all five senses, motor output to the limbs, the vestibular system (loopy things in the ear), and, of course, memory.

One illustration of this polysensory input is an experiment in which the walls were quickly dropped as a rat ran down a long hallway. The walls, and the symbols researchers had placed on them, were all the rat could see beforehand, so it faced a completely different environment after the walls dropped. If distal cues were all that determined place cell firing, the researchers would have seen a change, but the cells continued to fire as they had before the walls were dropped. In this instance, the rat wasn't fooled: the continuous act of running was enough to tell it that it hadn't gone anywhere new, and the cells fired the same way as a result.

To sum up this section: when place cell firing is overlaid on both the time and the location of a rat’s movements, researchers can begin deciphering the neural code of internal maps, and then move both downstream and upstream of the hippocampus to figure out the inputs. By doing this, scientists have found the major input determining place cell firing: grid cells.

Reward Prediction

Dopamine was long thought to be the chemical responsible for pleasure, but it’s now thought of as responsible for “wanting” or motivational urges. It’s also involved in other things, like movement and suppression of the hormone prolactin, but I’ll only refer to dopamine in its reward related role.

By recording the action potentials (spikes, or firing) of dopaminergic cells of the substantia nigra and ventral tegmental area, two interrelated areas in the midbrain, scientists have been able to find what’s known as the dopamine reward prediction error signal. All the figures I’ll use below come from Wolfram Schultz’s Scholarpedia page on reward signals.

The least ambiguous way I can think of to illustrate this principle is to have the reader imagine a monkey sitting in front of a screen, with a little tube at his mouth for administering drops of juice. On the screen, different pictures are being flashed and after certain ones he’ll receive a drop of juice. Researchers call the pictures that are paired with juice a conditioned stimulus (CS) because there’s nothing inherent in a random picture that tells the monkey a reward is coming; he has to learn the relationship.

In the figure below, you can see how these cells respond to a reward that the monkey didn’t know was coming:

In this figure and the ones that follow, the individual cells being recorded are shown in rows from top to bottom. The spikes of these cells are shown left to right as they occur over about 3 seconds. At the top they’re all summed, so it’s easier to see when a lot of them fire together.

Because the monkey doesn’t know that the picture means a reward is coming, the cells fire a lot in response to the surprise juice. This is pretty much the pattern they follow, until the monkey has learned which pictures come before a drop of juice. Once the monkey learns, the dopamine cells fire like this:

Two things are very clear here. The first is that the cells all fire at essentially the same moment in response to the conditioned stimulus. The second is that the reward elicits no change in the normal firing of the cells. From this figure, the idea that dopamine doesn’t correlate with the pleasure of a reward should make sense.

Dopaminergic firing will continue in this pattern if the experimental conditions stay the same, and a reward always follows a conditioned stimulus. The experiment can be modified, however, and a stimulus that used to signal a reward will no longer do so.

When this happens, the monkey’s dopaminergic cells show an interesting firing pattern:

The response to the CS is the same, but there’s a period where the neurons are completely shut down because there was no reward when it was expected.

All three of these responses are distinctly correlated with stimuli in the world, in the same way that place cells fire in specific locations.
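
The standard computational reading of this signal is a prediction error: the difference between the reward received and the reward the animal had come to expect. Here's a toy sketch of that idea, my own illustration rather than anything from Schultz's papers, that qualitatively reproduces the three firing patterns above using a simple learned prediction:

```python
# Toy reward prediction error model (my own sketch, not from Schultz's work).
# delta = reward received - reward predicted, with the prediction learned
# from repeated CS -> reward pairings, Rescorla-Wagner style.

def prediction_error(reward, prediction):
    """Positive when reward exceeds expectation, negative when it falls short."""
    return reward - prediction

learning_rate = 0.2
prediction = 0.0  # how much reward the CS predicts, before any learning

# 1) Unexpected reward: nothing is predicted, so the error (the burst) is large.
print("unexpected reward:", prediction_error(1.0, prediction))            # 1.0

# Repeated CS -> reward pairings train the prediction toward the true reward.
for _ in range(50):
    prediction += learning_rate * prediction_error(1.0, prediction)

# 2) Fully predicted reward: error at reward time is ~0, no change in firing
#    (the burst has effectively moved to the CS, which isn't modeled here).
print("predicted reward:", round(prediction_error(1.0, prediction), 3))   # ~0.0

# 3) Omitted reward: a negative error, the pause in firing seen in the last figure.
print("omitted reward:", round(prediction_error(0.0, prediction), 3))     # ~-1.0
```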

These two wonders of the brain, place cells and the dopamine reward signal, represent the neurological correlates of mental percepts for which there are no sensory organs. Researchers have spent several decades refining knowledge of both so that we now know just how to find the cell activity and, for the most part, what it means.

Can we take these two signals and look at them simultaneously to discover anything useful?

This is the question that a recent paper by Adam Johnson and David Redish asks. It also looks at how you would frame an experiment to find out. I think it's a very exciting proposal, and I'll go over it in part 3 of this series, along with some of my own thoughts on the matter.

For more information on either of the topics I talked about above, I recommend checking out the University of Bristol's spatial memory page and Wolfram Schultz's Scholarpedia article on reward signals, both linked below.

References:

University of Bristol’s neuroscience website: http://www.bristol.ac.uk/synaptic/research/projects/memory/spatialmem.htm

The Mosers' annual review article on Place Cells, Grid Cells, and the Brain’s Spatial Representation System: dx.doi.org/10.1146/annurev.neuro.31.061307.090723

Schultz’s scholarpedia page:

http://www.scholarpedia.org/article/Reward_signals

Schultz’s annual review article on reward: dx.doi.org/10.1146/annurev.psych.56.091103.070229