Sunday, December 28, 2008

Serotonin! What Is It Good For?

Absolutely nothing...? Not quite, but it may be good for a lot less than anyone thought. At least according to a recent paper in PLoS One describing what happens to mice given genetic knockouts that left them almost completely unable to produce the neurotransmitter serotonin (5-HT).

The mice lacked either one, or both, of two genes called TPH1 and TPH2, which code for two related enzymes called tryptophan hydroxylase-1 and tryptophan hydroxylase-2. These are necessary for the production of serotonin from the amino acid tryptophan (which you get from eating turkey... and also most other foods). No tryptophan hydroxylase, no serotonin.

Tryptophan hydroxylase-1 is mostly responsible for making serotonin outside the brain, while tryptophan hydroxylase-2 predominates in neurones. So the mice lacking both enzymes ("double knockouts") should have had no serotonin at all, anywhere. In fact, chemical analysis revealed a small amount present in the brains, but it was >99% less than normal, and even this may have been some kind of contaminant rather than serotonin:
Reduction of 5-HT in TPH2KO mice ranged from 67.5% (cerebellum) to 96.9% (striatum), while 5-HT reduction in DKO mice ["double knockouts" who lacked both TPH1 and TPH2] ranged from 94.4% (cerebellum) to 99.2% (cortex). 5-HT levels were lower in DKO mice than in TPH2KO mice in all brain regions examined. The percentage of 5-HIAA reduction paralleled changes in 5-HT. No generalized changes were noted in other neurotransmitter levels.
So, what happened to these serotonin-less animals? The big story is - remarkably little. They were alive, for one thing. They weren't writhing in pain thinking "Every moment I live is agony!" like that mutant on The Simpsons. The double knockout mice were slightly smaller and leaner than usual (less body fat), but only by a few percentage points. Otherwise, they were normal on almost every measure. This is very surprising, given that serotonin is one of the oldest neurotransmitters in evolutionary terms. Even insects use serotonin as a transmitter. Even some single-celled organisms have serotonin. There are at least 14 different types of serotonin receptor in the mouse body (the same goes for humans). What are they all doing? Nothing especially important, clearly.
The results dramatically indicate that 5-HT is not essential for overall development and that its role in behavior is modulatory rather than essential. Initial phenotypic analysis of these mutants revealed no differences in a range of measures of physical health including assays for cardiac, immune system, endocrine, and ophthalmic function (unpublished observations).
However, that's not the end of the story. The mice were also tested in a battery of standard behavioural tests used to measure anxiety levels and the like; these are commonly used to measure the effects of antidepressants and other such drugs in rodents. Given that antidepressants such as Prozac are supposed to work by increasing serotonin levels in the brain, you'd expect that mice with no serotonin would be "depressed".

The TPH1 knockout animals showed no differences at all - no surprise since, as you'll recall, they only lacked serotonin outside the brain, e.g. in the intestines, where it seems to play a role in digestion (although presumably not a vital one). The TPH2 knockouts and the TPH1/TPH2 double knockouts were remarkably normal too, showing no differences on most of the behavioural tests:
For the TPH2KO and DKO, there were no differences between the KO or DKO and WT littermate control mice in motor coordination, acoustic startle response and sensorimotor gating, tonic inflammatory pain sensitivity, and learning and memory as assessed in inverted screen, pre-pulse inhibition, formalin paw, and trace fear conditioning assays, respectively.
But they did show differences in the marble burying test, the forced swim test, and the tail suspension test. The double knockouts generally showed the most profound effects. But here's the twist - far from being "depressed", the knockout mice were less "depressed" on the forced swim test (i.e. the genetic knockout had the same effect as that seen with antidepressants). That is, they showed more struggling and less immobility. This is the exact opposite of what you might have expected.

On the other hand, the knockouts showed increased immobility on the tail suspension test, which is generally taken to be a depressive behaviour, and they buried more marbles in the marble burying test, which is opposite to the effects of Prozac. It's not clear what if anything burying more marbles means; some have suggested that the frantically burying mice are showing OCD-like symptoms. Hmm.

So, what these results show is that a) mice can live almost normal lives without serotonin, or at most with trace amounts, and b) the main effects of having no serotonin are upon "depression-like" behaviours, but whether the knockouts are more or less depressed is unclear (the authors push the idea that they're more depressed, but really it's impossible to say). Still, this is a bit more evidence that the serotonin hypothesis of depression isn't quite dead.

To my mind, though, the most interesting result by far is that serotonin is so dispensable. Mice can live essentially normal lives without it, which is not true for most other neurotransmitters. Bear in mind, though, that just because serotonin is not necessary for normal functioning doesn't mean that if you do have serotonin, it isn't doing anything. It might be that in the knockout mice, other systems had taken over the roles normally played by serotonin.

Finally, this study was run by Lexicon Pharmaceuticals, who use genetic knockout technology to discover new drugs. They end by saying...
Our results strongly support targeting the 5-HT system to treat affective disorders and the use of knockout mice as a tool to tease apart mechanisms involved in the etiology of these disorders.
Take that as you will.

Katerina V. Savelieva, Shulei Zhao, Vladimir M. Pogorelov, Indrani Rajan, Qi Yang, Emily Cullinan, Thomas H. Lanthorn (2008). Genetic Disruption of Both Tryptophan Hydroxylase Genes Dramatically Reduces Serotonin and Affects Behavior in Models Sensitive to Antidepressants. PLoS ONE, 3(10). DOI: 10.1371/journal.pone.0003301


Friday, December 26, 2008

Seven Things You Didn't Know About Milgram

There's been a lot written about psychology professor Jerry Burger's recent replication of the famous "obedience" experiments first carried out by Stanley Milgram in the early 1960s. Here's Burger's paper in which he reports that obedience rates are almost the same today as they were nearly 50 years ago.

Wikipedia's page on this experiment has an excellent summary of the methodology and results of the original study if you're not familiar with it.

It's a testament to the importance of the original obedience experiment that many who know nothing else about psychology have at least heard of it, and it's common knowledge that Milgram found that a startlingly high proportion of ordinary volunteers were willing to administer very strong "shocks" to an innocent victim, on the orders of the experimenter. But there's much more to the "Milgram Experiment" than many people realize. So - read on. That's an order.
  1. There wasn't just one experiment. In 1974, Milgram discussed the results and implications of his research in a book, Obedience to Authority: An Experimental View. (The cover is rather amusing). In it he describes no fewer than 19 different experiments, not including pilot studies. Most of the studies included 40 participants, although some of the later ones used 20. The basic nature of the experimental situation was the same in each case, but important factors were varied between experiments, offering some insight into the conditions which drive obedience (see below). All of this work was performed at or near Yale between 1960 and 1963. Milgram also refers to later replication studies carried out in "Princeton, Munich, Rome, South Africa and Australia" where "the level of obedience was invariably somewhat higher than that found [in the Yale studies]". So, whatever was going on in the Milgram experiments, it wasn't unique to the USA, and the fact that Jerry Burger has just obtained very similar results shows that it wasn't unique to the 1960s either (although, to look at it the other way, the USA today is not especially conformist).
  2. Subjects were paid $4 each. Milgram's book is full of details such as this, including plenty of photos and drawings illustrating what happened. The picture here shows the designated "victim" in most of the experiments - James McDonough, "a 47-year old accountant, trained for the role; he was of Irish-American descent and most observers found him mild-mannered and affable". This is the face that launched a thousand shocks - seeing it, for me, brought home the results of the obedience studies very starkly. How could anyone shock that guy? Another important detail is that rather than recruiting undergraduate students, as most psychology experiments do, Milgram placed adverts in local newspapers and, when that only got a few hundred volunteers, resorted to cold-calling names in the New Haven telephone directory. This meant that the participants were (as far as possible) representative of the normal population - a crucial point.
  3. Milgram was an Evolutionary Psychologist. Well, sort of. He was into Evolutionary Psychology before it became a buzzphrase - indeed, before the term had been coined. In his book, Milgram notes that "the formation of hierarchically organised groupings lends enormous advantage to those so organized in coping with dangers of the physical environment, threats posed by competing species, and potential disruption from within." In other words, an animal which has the ability to submit to authority when necessary might be more likely to survive than one which was stubbornly individualistic. He goes on to theorize that humans have evolved a psychological mechanism for obedience, which he calls the "Agentic State", a special state of mind in which our normal moral inhibitions are bypassed and we become an agent of an authority. I'm not sure many people would buy this as a good explanation, and it isn't clear if Milgram's evolutionary logic relies on Group Selection theory, but it's certainly interesting.
  4. It was stressful. Most of the subjects were acutely distressed during the procedure - hardly surprising given the screams and protests of their "victim". Some subjects shook with tension; one started laughing whenever they had to give a shock. Yet most of them continued to give the shocks despite being tangibly upset about it. They didn't want to hurt the "victim" - but they did. This inner conflict suffered by the subjects comes across vividly in Milgram's writing, and it led to some fascinating behaviour. In Experiment 7, in which the "experimenter" giving orders left the room and spoke to the subjects by telephone, many subjects continued to give shocks but gave much milder shocks than they were supposed to. In other words, they were unwilling to hurt the victim but also unwilling to openly disobey (although in this case, 80% of subjects eventually did). Most people also seemed to try to keep the shocks as short as possible, and tried to minimize the number of punishments by helping the victim to give the right answers. Milgram argued that this ruled out the view that his experiment showed people to be "aggressive" or "sadistic" - rather, people were naturally averse to causing harm, but the situation they found themselves in led them to do so anyway. As he put it: "The social psychology of this century reveals a major lesson: often it is not so much the kind of person a man is as the kind of situation in which he finds himself that determines how he will act."
  5. There was follow-up. Milgram is sometimes accused of being a cavalier or even callous researcher who exposed his volunteers to emotional harm. In fact, although, as the cliché goes, Milgram's studies would never pass an ethics committee today, he seems (at least on his own account) to have gone to great effort to ensure that his participants were not traumatized and to record how they felt about the experiment. Immediately after the experiment was finished the subjects were "debriefed" and told what had really happened; if they had been obedient, they were reassured that this was normal behaviour (true, of course). Then, a few weeks later, they were sent a write-up of the results of the research and an explanation of the rationale. A questionnaire asked how they felt about the study overall; 43% said they were very glad to have done it, 40% said they were glad, and just 1.3% were sorry or very sorry to have done it; there was little difference between those who obeyed and those who didn't. Commenting on the fact that people seemed remarkably relaxed about what they had done, in retrospect, Milgram wryly noted: "The same mechanisms that allow the subject to perform the act...continue to justify his behaviour for him".
  6. Not everyone obeyed. You probably already know this, but it tends to seem less exciting than the fact that most people did. In the best known version of the experiment (Experiment 5), 35% of people refused to administer the highest shock level, though some of those came close to it. In other experimental set-ups, obedience rates were different - when the study was carried out in a run-down city apartment, rather than in the prestigious surroundings of Yale, obedience rates dropped (but were still 47.5%). When the subjects did not have to administer the shocks themselves but simply sat by and took notes while someone else did, almost everyone complied (92.5%). Yet there were no clear explanations for why some individuals obeyed and some did not. Some people were chillingly obedient, others were boldly defiant, but it's not clear why. Age, religion (Catholic vs. Protestant), and political affiliation did not seem to matter. Most of the studies used male volunteers only, for some reason, but Experiment 8 used women; compared to Experiment 5 the results were pretty much identical. In the early experiments there were some indications that better educated and higher-status men were more defiant, but this did not seem to hold for all of the studies.
  7. This actually happened. Again, you already knew this, but it's worth taking a moment to remember it. This really happened and it's been replicated ad nauseam; so far as I can see, no-one has successfully criticized the basic assumptions of the paradigm (although if anyone has, please let me know). Milgram's faith in humanity seems to have been shaken by his research - his book contains case studies of individual participants which are cynical to the point of misanthropy, even down to the level of the physical appearance and personality of the participants ("Mr Batta is a 37-year old welder...he has a rough-hewn face that conveys a conspicuous lack of alertness. His overall appearance is somewhat brutish...[during the experiment] what is remarkable is his total indifference to the learner; he hardly takes cognizance of him as a human being...the scene is brutal and depressing...at the end of the session he tells the experimenter how honored he has been to help him.") The subjects who disobeyed authority get a slightly better treatment, but not much better. Yet who can blame Milgram for this? It's worth bearing in mind also that Milgram was Jewish. His text is full of references to Nazi Germany, Hannah Arendt, the Vietnam War and the My Lai massacre. The hero of the book, if there is one, seems to be the young man who took part in the experiment and, as a result, decided to apply for Conscientious Objector status to avoid being sent to Vietnam. He got it.
Links: Dr Thomas Blass's StanleyMilgram.com - excellent.
Dr Blass's review paper on the Milgram paradigm.


Wednesday, December 24, 2008

Encephalon #61 is up

The 61st edition of neuroscience/psychology-based blog carnival, Encephalon, is up at Sharpbrains. I'm in it, twice, but don't let that stop you - the rest of it is pretty good...

Monday, December 22, 2008

John F. Kennedy, speed freak?

In his book In Sickness and In Power, the former British politician and doctor David Owen (sorry - Lord Owen) discusses the physical and mental health of various 20th century leaders.(*) The chapter on John F. Kennedy is extremely interesting. The most popular President of the century was both seriously ill and a big drug user.

Although he denied it at the time, to the point of lying, it's now known that Kennedy suffered from Addison's disease, a serious chronic condition leading to a lack of the steroid hormone cortisol, and in his case, also of thyroid hormone. As a result he required daily hormone treatments of cortisone, tri-iodothyronine and testosterone to stay alive. Kennedy also suffered from several other health problems such as chronic back pain following a World War 2 injury (his boat was rammed by a Japanese destroyer and sank), and came close to death at least twice.

This is quite interesting in itself, but especially so since both cortisone and testosterone can alter mood and behaviour. In high doses, cortisone can produce mood swings, agitation and mania, and with prolonged use, depression; while testosterone... well, it's testosterone. In theory, Kennedy only needed to take enough of these hormones to achieve normal levels, but in fact, Owen says, for long periods of his Presidency he was taking much more than that, partly because doctors in the 1960s tended to use higher doses than would now be considered wise, and partly because he was being simultaneously treated by a number of doctors who didn't always know what the others were doing (seriously.) Allegedly, some photos of Kennedy show symptoms of excessive cortisol levels ("Cushingoid features") such as a puffy face, although I haven't checked this. (This picture shows signs of Addison's disease - low weight and dark skin - before he was treated).

Most interestingly for drug fans, Owen says that Kennedy was a regular user of amphetamine ("speed"), which he was given by Dr Max "Dr Feelgood" Jacobson, who was essentially a high-class quack, although a very popular one. Jacobson was a methamphetamine user himself and he was eventually banned from practicing medicine in 1975. Jacobson gave Kennedy injections of amphetamine and steroids, and probably also gave him vials of drugs to inject himself with; on at least one occasion he probably gave him methamphetamine. All of this was perfectly legal, but it was medically unnecessary, and maybe downright dangerous. Kennedy also had injections of Demerol (pethidine) for chronic back pain, a powerful painkiller which pharmacologically is rather like a cross between morphine and cocaine. Fun stuff. Jacobson, however, disapproved of this.

Owen speculates that Kennedy's medication, as well as his general health, contributed to his erratic performance during the first half of his presidency - hence the Bay of Pigs fiasco, and an embarrassingly poor performance during a summit with the Soviet leader Khrushchev. Later on, when Kennedy's health situation had improved and he had cut back on the speed and steroids, he was able to handle the Cuban Missile Crisis very effectively and, probably, saved the world. A skeptic would say that Kennedy might have just learned from his mistakes of course, but Owen's theory is certainly possible, and it's worth bearing in mind when thinking about the possibility of the widespread use of "cognitive enhancers" - most of these drugs are stimulants with effects on mood and judgement. So remember - unless you want to preside over an abortive, ill-planned invasion of a small third-world country, keep away from speed.

(*)Mini-book-review: In Sickness and in Power is interesting but badly flawed; it mixes sound history with fluffy speculation seemingly at random. Would French President Mitterrand have acted differently on the War in Yugoslavia if he hadn't had cancer? Would the Shah of Iran have been forced to step down earlier if he had admitted to having leukemia? Possibly - but we really don't know. Owen spends a lot of time wondering about such hypothetical questions. He has also invented a new psychiatric diagnosis, "hubris syndrome", complete with a DSM-IV style symptom checklist, with which he proceeds to diagnose people like Tony Blair and George Bush on the basis that they made bad decisions about Iraq. Fair enough, but I'd have preferred to hear more about Nixon and Bush's alcoholism or about Winston Churchill's depression, which are discussed, but only briefly.

Sunday, December 21, 2008

A Gene for Power-Line Leukemia?

Some people believe that living near high-voltage power lines raises the risk of childhood cancer. Most people are skeptical. A Chinese group have just published a paper in the journal Leukemia and Lymphoma, claiming that a genetic polymorphism in the XRCC1 gene, which has been previously linked to various cancers, raises the risk of electromagnetic field (EMF)-related leukemia. People who believe in EMF-related leukemia are happy. The Daily Mail reported on this study, quoting no fewer than three such people.

What's the real story? The authors took 123 childhood leukemia patients living near Shanghai. They took blood samples for DNA analysis and asked the parents to report on a wide range of possible environmental risk factors, not just EMF:
The mothers of the patients were interviewed at the hospital by specifically trained medical doctors using a questionnaire. Visits to the present (or previous) residential areas of 66 cases were arranged, and the actual values of magnetic field intensities were measured using an EMF detector (TriField Meter, AlphaLab, USA). Questionnaires covered information about the parents’ sociodemographic characteristics, the children’s pre and postnatal characteristics and the familial history of cancer and autoimmune diseases. The questions related to environmental exposure covered pregnancy and the period from birth to diagnosis and detailed information including: Was there a television set/refrigerator/ microwave oven in the children’s rooms? Did you regularly use insecticides at home? Did you use gardening chemicals such as, fertilisers, herbicides, insecticides, fungicides, others? Were there chemical factories/telecommunication transmitters/electric transformers/power lines around your house?
Relying on self-report like this raises the risk of recall bias, but to be honest, this doesn't seem like a major problem. Certainly there is a much bigger problem with this study (see below). The authors genotyped the children for six different SNPs (genetic variants) which have been previously implicated in cancer:
The MassARRAY technology platform (Sequenom, San Diego, California, USA) was used to detect the SNPs in hMLH1 Ex8-23A>G (rs1799977), APEX1 Ex5+5T>G (rs1130409), MGMT Ex7+13A>G (rs2308321), XRCC1 Ex9+16G>A (rs25489), XPD Ex10-16G>A (rs1799793) and XPD Ex23+61T>G (rs13181)
See the problem that's developing here? Six SNPs, who knows how many different environmental factors (the paper isn't clear, but it seems to be at least seven, see below) - that's a textbook example of multiple comparisons. Any statistical comparison has a chance of giving a positive result just by chance. If you do enough comparisons, you will find something, just by chance.
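To put a rough number on it (my own back-of-envelope calculation, not anything from the paper): with independent tests at a 5% significance threshold, the chance of at least one false positive climbs fast as the number of comparisons grows.

```python
# Back-of-envelope: probability of at least one false positive
# among n independent tests, each run at alpha = 0.05.
alpha = 0.05

for n_tests in (1, 6, 42):
    p_any = 1 - (1 - alpha) ** n_tests
    print(f"{n_tests:2d} tests: P(at least one false positive) = {p_any:.2f}")
```

With 42 comparisons, a "significant" result somewhere is more likely than not - roughly an 88% chance - even if every null hypothesis is true. Tests on the same dataset aren't fully independent, so treat this as a rough guide only.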

The authors do not report making an attempt to correct for this (although there are plenty of ways of doing so). They never even acknowledge the problem. They simply report on their only positive result - an association between the XRCC1 risk allele and "proximity to electrical transformers and power lines" - and relegate all the negative results to a brief summary:
No significant interactions between the proximity of the electric transformers and power lines and other genotypes were observed. No significant interactions were observed between genotypes and the presence of television sets, refrigerators or microwave ovens in children’s rooms, pesticides use or the presence of chemical factories or telecommunication transmitter within 500 m of the houses.
The positive result was that out of the children with leukemia, those living within 100m of electrical transformers and power lines were more likely to carry the XRCC1 risk allele than those not living within this proximity. Those living within 50m were slightly more likely than that. Under the assumption that genotype is not correlated with environment in the general population (a reasonable assumption, and they did test this in a control sample), this indicates a G x E interaction for leukemia / lymphoma risk, with p below 0.01.

One such result from what seems like at least 42 such comparisons is not especially impressive. It's certainly not proof of an interaction between XRCC1 and EMF, it's not even "suggestive evidence", it's at best a prompt for further research. Even being generous, and assuming that they would not have reported on an association with any risk factor other than proximity to power lines, this is still 6 comparisons with different polymorphisms (more if you count the fact that children living at differing distances from power lines were tested separately).
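To see what one standard fix would do to the result (a minimal Bonferroni sketch - my illustration, not the authors' analysis): multiply the reported p-value by the number of comparisons made.

```python
# Bonferroni correction sketch: a raw p-value just under 0.01 looks
# much less impressive once ~42 comparisons are taken into account.
n_comparisons = 42
raw_p = 0.01                                  # the paper's reported threshold
adjusted_p = min(raw_p * n_comparisons, 1.0)  # Bonferroni-adjusted
print(adjusted_p)
```

The adjusted value, about 0.42, is nowhere near conventional significance; even the generous count of 6 comparisons gives about 0.06. Bonferroni is conservative, but that is rather the point here.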

Postscript: I hope that I'm wrong about this. It would be great if XRCC1 raised the risk of childhood cancer, because it would mean that we could prevent some childhood cancers by keeping at-risk children away from power-lines. This post is just something I hacked together in an hour and a half on a Sunday morning, and I'm not a statistician - it would be awful if I've just spotted a serious problem with an important paper which went unnoticed by journal editors and peer reviewers. So if someone wants to disagree with me please, please do - I'll provide the PDF of the paper on request if you need it. Until then, I think that this is an especially bad example of the problem of multiple comparisons and a tragic case of sloppy science which could end up having serious consequences for health, in terms of acting as a red herring distracting from more valuable research.

[BPSDB]

ResearchBlogging.orgYou Yang, Xingming Jin, Chonghuai Yan, Ying Tian, Jingyan Tang, Xiaoming Shen (2008). Case-only study of interactions between DNA repair genes (hMLH1, APEX1, MGMT, XRCC1 and XPD) and low-frequency electromagnetic fields in childhood acute leukemia Leukemia and Lymphoma, 49 (12), 2344-2350 DOI: 10.1080/10428190802441347

Friday, December 19, 2008

The Lonely Grave of Galileo Galilei

Galileo would be turning in his grave. His achievement was to set science on the course which has made it into an astonishingly successful means of generating knowledge. Yet some people not only reject the truths of the science that Galileo did so much to advance; they do it in his name.

Intro: In Denial?

Scientific truth is increasingly disbelieved, and this is a new phenomenon, so much so that new words have been invented to describe it. Leah Ceccarelli defines manufacturoversy as a public controversy over some question (usually scientific) which is not considered by experts on the topic to be in dispute; the controversy is not a legitimate scientific debate but a PR tool created by commercial or ideological interests.

Probably the best example is the attempts by tobacco companies to cast doubt on the association between tobacco smoking and cancer. The techniques involved are now well known. The number of smokers who didn't quit smoking because there was "doubt" over the link with cancer is less clear. More recently, there have been energy industry-sponsored attempts to do the same to the science on anthropogenic global warming. Other cases often cited are the MMR-autism link, Intelligent Design, and HIV/AIDS denial, although the agendas behind these "controversies" are less about money and more about politics and cultural warfare.

Many manufacturoversies are also examples of denialism, which Wikipedia defines as
the position of governments, political parties, business groups, interest groups, or individuals who reject propositions on which a scientific or scholarly consensus exists
although the two terms are not synonymous; one could be a denialist without having any ulterior motives, while conversely, one could manufacture a controversy which did not involve denying anything (e.g. the media-manufactured MMR-causes-autism theory, while completely wrong, didn't contradict any established science, it was just an assertion with no evidence and plenty of reasons to think it was wrong.) Denialism is very often accompanied by invocations of Galileo (or occasionally other "rebel scientists"), in an attempt to rhetorically paint the theory under attack as no more than an established dogma.

Just a caveat: in the wrong hands, the concepts of manufacturoversy and denialism could become a means of rubbishing legitimate dissent. The slogan of the denialism blog is "Don't mistake denialism for debate", but the line is sometimes very fine(*). For example, I'm critical of the idea that psychiatric medications and electroconvulsive therapy are of little or no benefit to patients. If one wanted to, it would be possible to make a coherent-sounding case as to why this debate was a manufacturoversy on the part of the psychotherapy industry to undermine confidence in a competing form of treatment which is overwhelmingly supported by the scientific evidence. This would be wrong (mostly).

A History of Error

Anyway. What's interesting is that the idea of inappropriate or manufactured doubt about scientific or historical claims is a very new phenomenon. Indeed, it's very hard to think of any examples before 1950, with the possible exception of the first wave of Creationism in the 1920s. Leah Ceccarelli points out that many of the rhetorical tricks used go back to the Greek Sophists but until recently the concept of denialism would have been almost meaningless, for the simple reason that this requires a truth to be inappropriately called into question and before about the 19th century, to a first approximation, we didn't have access to any such truths.

It's easy to forget just how ignorant we were until recently. The average schoolkid today has a more accurate picture of the universe than the greatest genius of 500 years ago, or of 300 years ago, and even of 100 years ago (assuming that the schoolkid knows about the Big Bang, plate tectonics, and DNA - all 20th century discoveries).

To exaggerate, but not very much: until the last couple of centuries of human history, no-one correctly believed in anything, and people had many beliefs that were actively wrong - they believed in ghosts, and witches, and Hiranyagarbha, and Penglai. People erred by believing. Those who disbelieved were likely to be right.

Things have changed. There is more knowledge now; today, when people err, it is increasingly because they reject the truth. No-one in the West now believes in witches, but hundreds of millions of us don't believe that the visible universe originated in a singularity about 13.5 billion years ago, although this is arguably a much bigger mistake to make. In other words, whereas in the past the main problem was belief in false ideas ("dogma"); increasingly the problem is doubting true ones ("denialism").

Myths & Legends of Science

The problem is that the way most people think about science hasn't caught up with the pace of scientific change. In just a couple of hundred years, science has gone from being an assortment of separate, largely bad notions, to being a vast construct of interlinking and mutually supporting theories, the foundations of which are supported by mountains of evidence. Yet all of our most popular myths about science are Robin Hood stories - the hero is the underdog, the rebel, the Maverick who stands up to authority, battles the entrenched beliefs of the Establishment, and challenges dogma. In other words, the hero is a denialist - albeit one who turns out to be right.

Once, this was realistic. Galileo was an Aristotelean cosmology denier; Pasteur was a miasma theory denier; Einstein was a Newtonian physics denier. (In fact, the historical facts are a bit more complicated, as they often are, but this is true enough.) But these stories are out of date. Thanks to the great deniers of the past, there are few, if any, inappropriate dogmas in mainstream science. There, I said it. Thanks to the efforts of scientists past and present, science has become a professional activity with, generally, a very good success rate.

The HIV/AIDS hypothesis and anti-retroviral drugs were developed by orthodox career scientists with proper qualifications working within the mainstream of biology and medicine. They probably wore boring, conventional white coats. There were no exciting paradigm shifts in HIV science. There was no Galileo of HIV; there was Robert Gallo. Yet orthodox science has been successful in delivering treatments for HIV and understanding of the disease (anti-retrovirals are not perfect, but they're a hell of a lot better than untreated AIDS, and just 20 years ago that was what all patients faced.) The skeptics, the rebels, the Robin Hoods of HIV/AIDS - they have been a disaster. If global warming deniers succeed, the consequences will be much worse.

Of course, we do still need intelligent rebels. It would be a foolhardy person(**) who predicted that there will never be another paradigm shift in science; neuroscience, at least, is due at least one more, and there are parts of the remoter provinces of science, such as behavioural genetics, which are in serious need of a critical eye. But the vast majority of modern science, unlike the science of the past, is actually quite good. Hence, rebels are most likely wrong. To make a foolhardy prediction: there will never be another Galileo in the sense of a single figure who denies the scientific consensus and turns out to be right. There can only be a finite number of Galileos in history - once one succeeds in reforming some field, there is no need for another - and we may well have run out. My previous post on this topic included the bold claim that
if most scientists believe something you probably should believe it, just because scientists say so.
Yet this wasn't always true. To pluck a nice round number out of the air, I'd say that science has only been this trustworthy for 50 years. Most of our myths and ideas about science date from before that era. Science has moved on since the time of Galileo, thanks to his efforts and those of the scientists who came after him, but he is still invoked as a hero by those who deny scientific truth. He would be turning in his grave, in the earth which, as we now know, turns around the sun.

(*) and of course as we know, "it's such a fine line between stupid and clever".
(**) As foolhardy as Francis Fukuyama who in 1989 proclaimed that history had ended and that the world was past the era of ideological struggles.

[BPSDB]

The Lonely Grave of Galileo Galilei

Galileo would be turning in his grave. His achievement was to set science on the course which has made it into an astonishingly successful means of generating knowledge. Yet some people not only reject the truths of the science that Galileo did so much to advance; they do it in his name.

Intro: In Denial?

Scientific truth is increasingly disbelieved, and this is a new phenomenon, so much so that new words have been invented to describe it. Leah Ceccarelli defines manufacturoversy as a public controversy over some question (usually scientific) which is not considered by experts on the topic to be in dispute; the controversy is not a legitimate scientific debate but a PR tool created by commercial or ideological interests.

Probably the best example is the attempts by tobacco companies to cast doubt on the association between tobacco smoking and cancer. The techniques involved are now well known. The number of smokers who didn't quit smoking because there was "doubt" over the link with cancer is less clear. More recently, there have been energy industry-sponsored attempts to do the same to the science on anthropogenic global warming. Other cases often cited are the MMR-autism link, Intelligent Design, and HIV/AIDS denial, although the agendas behind these "controversies" are less about money and more about politics and cultural warfare.

Many manufacturoversies are also examples of denialism, which Wikipedia defines as
the position of governments, political parties, business groups, interest groups, or individuals who reject propositions on which a scientific or scholarly consensus exists
although the two terms are not synonymous; one could be a denialist without having any ulterior motives, while conversely, one could manufacture a controversy which did not involve denying anything (e.g. the media-manufactured MMR-causes-autism theory, while completely wrong, didn't contradict any established science, it was just an assertion with no evidence and plenty of reasons to think it was wrong.) Denialism is very often accompanied by invokations of Galileo (or occasionally other "rebel scientists"), in an attempt to rhetorically paint the theory under attack as no more than an established dogma.

Just a caveat: in the wrong hands, the concepts of manufacturoversy and denialism could become a means of rubbishing legitimate dissent. The slogan of the denialism blog is "Don't mistake denialism for debate", but the line is sometimes very fine(*). For example, I'm critical of the idea that psychiatric medications and electroconvulsive therapy are of little or no benefit to patients. If one wanted to, it would be possible to make a coherent-sounding case as to why this debate was a manufacturoversy on the part of the psychotherapy industry to undermine confidence in a competing form of treatment which is overwhelmingly supported by the scientific evidence. This would be wrong (mostly).

A History of Error

Anyway. What's interesting is that inappropriate or manufactured doubt about scientific or historical claims is a very new phenomenon. Indeed, it's very hard to think of any examples before 1950, with the possible exception of the first wave of Creationism in the 1920s. Leah Ceccarelli points out that many of the rhetorical tricks used go back to the Greek Sophists, but until recently the concept of denialism would have been almost meaningless, for the simple reason that denialism requires a truth to be inappropriately called into question, and before about the 19th century, to a first approximation, we didn't have access to any such truths.

It's easy to forget just how ignorant we were until recently. The average schoolkid today has a more accurate picture of the universe than the greatest genius of 500 years ago, or of 300 years ago, or even of 100 years ago (assuming that the schoolkid knows about the Big Bang, plate tectonics, and DNA - all 20th century discoveries).

To exaggerate, but not very much: until the last couple of centuries of human history, no-one correctly believed in anything, and people had many beliefs that were actively wrong - they believed in ghosts, and witches, and Hiranyagarbha, and Penglai. People erred by believing. Those who disbelieved were likely to be right.

Things have changed. There is more knowledge now; today, when people err, it is increasingly because they reject the truth. No-one in the West now believes in witches, but hundreds of millions of us don't believe that the visible universe originated in a singularity about 13.7 billion years ago, although this is arguably a much bigger mistake to make. In other words, whereas in the past the main problem was belief in false ideas ("dogma"), increasingly the problem is doubting true ones ("denialism").

Myths & Legends of Science

The problem is that the way most people think about science hasn't caught up with the pace of scientific change. In just a couple of hundred years, science has gone from being an assortment of separate, largely bad notions, to being a vast construct of interlinking and mutually supporting theories, the foundations of which are supported by mountains of evidence. Yet all of our most popular myths about science are Robin Hood stories - the hero is the underdog, the rebel, the maverick who stands up to authority, battles the entrenched beliefs of the Establishment, and challenges dogma. In other words, the hero is a denialist - albeit one who turns out to be right.

Once, this was realistic. Galileo was an Aristotelian cosmology denier; Pasteur was a miasma theory denier; Einstein was a Newtonian physics denier. (In fact, the historical facts are a bit more complicated, as they often are, but this is true enough.) But these stories are out of date. Thanks to the great deniers of the past, there are few, if any, inappropriate dogmas in mainstream science. There, I said it. Thanks to the efforts of scientists past and present, science has become a professional activity with, generally, a very good success rate.

The HIV/AIDS hypothesis and anti-retroviral drugs were developed by orthodox career scientists with proper qualifications working within the mainstream of biology and medicine. They probably wore boring, conventional white coats. There were no exciting paradigm shifts in HIV science. There was no Galileo of HIV; there was Robert Gallo. Yet orthodox science has been successful in delivering treatments for HIV and understanding of the disease (anti-retrovirals are not perfect, but they're a hell of a lot better than untreated AIDS, and just 20 years ago that was what all patients faced.) The skeptics, the rebels, the Robin Hoods of HIV/AIDS - they have been a disaster. If global warming deniers succeed, the consequences will be much worse.

Of course, we do still need intelligent rebels. It would be a foolhardy person(**) who predicted that there will never be another paradigm shift in science; neuroscience, for one, is surely due at least one more, and there are parts of the remoter provinces of science, such as behavioural genetics, which are in serious need of a critical eye. But the vast majority of modern science, unlike the science of the past, is actually quite good. Hence, rebels are most likely wrong. To make a foolhardy prediction: there will never be another Galileo, in the sense of a single figure who denies the scientific consensus and turns out to be right. There can only be a finite number of Galileos in history - once one succeeds in reforming some field, there is no need for another - and we may well have run out. My previous post on this topic included the bold claim that
if most scientists believe something you probably should believe it, just because scientists say so.
Yet this wasn't always true. To pluck a nice round number out of the air, I'd say that science has only been this trustworthy for 50 years. Most of our myths and ideas about science date from before that era. Science has moved on since the time of Galileo, thanks to his efforts and those of the people who came after him, but he is still invoked as a hero by those who deny scientific truth. He would be turning in his grave, in the earth which, as we now know, turns around the sun.

(*) and of course as we know, "it's such a fine line between stupid and clever".
(**) As foolhardy as Francis Fukuyama who in 1989 proclaimed that history had ended and that the world was past the era of ideological struggles.

[BPSDB]

Saturday, December 13, 2008

We Really Are Sorry, But Your Soul is Still Dead

Over the past few weeks, Christian neurosurgeon Michael Egnor, who writes on Evolution News & Views, and atheist neurologist Steve Novella (Neurologica) have been having an, er, vigorous debate about what neuroscience can tell us about materialism and the soul. As reported in New Scientist, this is part of an apparent attempt to undermine the materialist position (that all mental processes are the product of neural processes), on the part of the same people who brought you Intelligent Design. Many are calling it the latest front in the Culture War.

A couple of days ago Denyse O'Leary, a Canadian journalist who writes the blog Mindful Hack(*), posted some comments from Egnor about the great Wilder Penfield and his idea of "double consciousness" (my emphasis)
[By stimulating points on the cerebral cortex with electrodes during surgery] Penfield found that he could invoke all sorts of things- movements, sensations, memories. But in every instance ... the patients were aware that the stimulation was being done to them, but not by them. There was a part of the mind that was independent of brain stimulation and that constituted a part of subjective experience that Penfield was not able to manipulate with his surgery.... Penfield called this "double consciousness", meaning that there was a part of subjective experience that he could invoke or modify materially, and a different part that was immune to such manipulation.
I generally find arguing about religion boring, and I've no wish to enlist in any Culture Armies (I'm British - we're a nation of Culture Pacifists), but I'm going to say something about this, because it's just bad neuroscience. Maybe there are good arguments against materialism, but this isn't one.

Unfortunately, neither O'Leary nor Egnor allow comments on their blogs, but immediately after posting this I emailed them both with a link to this post. We'll see what happens.

Anyway, Penfield, whom you can read about in great detail at Neurophilosophy, was a pioneer in the functional mapping of the cerebral cortex. He was a neurosurgeon, and as part of his surgical procedures he would systematically stimulate different points of the cerebral cortex with an electrode, so as to locate which areas were responsible for important functions and avoid damaging them. Michael Egnor, following Penfield, is correct that this kind of point stimulation of the cortex tends to evoke sensations or motor responses which are experienced by the patient as external. Point stimulation is not reported to be able to affect our "higher" mental faculties such as our beliefs, desires, decisions, and "will"; it might evoke a movement of the arm, say, but the subject will report that this felt like an involuntary reflex, not a willed action.

However, to take this as evidence for some kind of dualism between a form of consciousness which can be manipulated via the brain and another, non-material level of consciousness which can't (the "soul" in other words), is like saying that because hammering away at one key of a piano produces nothing but an annoying noise, there must be something magical going on when a pianist plays a Mozart concerto. Stimulating a single small part of the brain is about the crudest manipulation imaginable; all we can conclude from the results of point-stimulation experiments is that some kinds of mental processes are not controlled by single points on the cortex. This should not be surprising, since the brain is a network of 100 billion cells; what's interesting, in fact, is that stimulating a few million of these cells with the tip of an electrode can do anything.

Neuroskeptic is frequently critical of fMRI, but one of my favorite papers is an fMRI study, Reading Hidden Intentions in the Human Brain. In this experiment the authors got volunteers to freely decide on one of two courses of action several seconds before they were required to actually do the chosen act. (It was deciding between adding and subtracting two numbers on a screen.) They discovered that it was possible to determine (albeit with less than 100% accuracy) what subjects were planning to do on any given trial, before they actually did it, through an analysis of the pattern of neural activity across a large area of the medial prefrontal cortex.

The green area on this image shows the area over which activity predicts the future action. Importantly, no one point on the cortex is associated with one choice over another, but the combination of activity across the whole area is (once you put it through some brilliant maths).
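The "brilliant maths" here is essentially multivariate pattern classification: train a decoder on many voxels at once, then test whether it predicts the upcoming choice better than any single voxel can. As a rough illustration of the principle only (not the paper's actual method or data - the simulated "voxels", the signal strength, and the nearest-mean decoder are all assumptions of mine), here is a minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_voxels = 400, 200

# A weak signal spread over the whole "region": each voxel carries only a
# sliver of information, but the combined pattern carries a lot.
w = rng.normal(0, 1, n_voxels)
w /= np.linalg.norm(w)                        # unit-norm activity pattern
labels = rng.integers(0, 2, n_trials)         # 0 = "add", 1 = "subtract"
# Each trial is the +w or -w pattern buried in heavy voxel-wise noise
data = np.outer(2 * labels - 1, w) + rng.normal(0, 1, (n_trials, n_voxels))

train, test = slice(0, 300), slice(300, None)

# Multivoxel decoder: difference of class means (a nearest-mean classifier)
coef = (data[train][labels[train] == 1].mean(axis=0)
        - data[train][labels[train] == 0].mean(axis=0))
multi_acc = ((data[test] @ coef > 0).astype(int) == labels[test]).mean()

# Best single voxel on its own, thresholded at zero
best = np.argmax(np.abs(w))
single_pred = (data[test, best] * np.sign(w[best]) > 0).astype(int)
single_acc = (single_pred == labels[test]).mean()

print(f"single voxel: {single_acc:.2f}, full pattern: {multi_acc:.2f}")
```

The single best voxel decodes the choice at little better than chance, while the linear combination of all 200 voxels decodes it well above chance - the same logic by which the study's authors could read intentions from the distributed pattern but not from any one point on the cortex.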

Based on this evidence, it's reasonable to suppose that we could manipulate human intentions if, instead of just one electrode, we had several thousand (or million), and if we knew exactly which pattern of stimulation to apply. Or to run with the piano analogy: we could play a wonderful tune if we were skilled enough to play the right notes in the right combinations in the right order.

In fact, there are plenty of things which already are known to alter "higher" processes. At the correct doses, acetylcholine receptor antagonists such as scopolamine and atropine can produce a state of delirium with hallucinations which are experienced as being indistinguishable from reality. Someone might talk to a non-existent friend or try to smoke a non-existent cigarette, without any knowledge of having taken a drug at all. Erowid has many first-hand accounts from people who have taken such drugs "recreationally" (a very bad idea, as you'll gather if you read a few.)

Then there's psychiatric illness. Someone who's psychotic may hear voices and believe them to be real communications from God, or the dead, or a radio transmitter in his head. A bipolar patient in a manic state may believe herself to have incredible talents or supernatural powers and dismiss as nonsense any suggestion that this is a result of her illness. In general those suffering from acute abnormal mental states may behave in a manner which is completely out of character, or think and talk in bizarre ways, without being aware of doing so. This is called "lacking in insight".

We don't yet know the neurobiological basis of these states, but that they (often) have one is beyond doubt; give the appropriate drugs - or use electricity to induce seizures - and they (usually) vanish. Many people in the advanced phases of dementia, especially Alzheimer's disease, as a result of neurodegeneration, are similarly unaware of being ill - hence the sad sight of formerly intelligent men and women wandering the streets, not knowing how they got there. Brain damage, or stimulation of deep brain structures (not the cortex which Penfield studied), can lead to profound alterations in personality and emotion. To summarize - if you seek the soul in the data of neuroscience, you will need to look harder than Penfield did.

Links : Sorry, But Your Soul Just Died - Tom Wolfe. A classic.

(*) - Mindful Hack - not to be confused with Mind Hacks.

[BPSDB]