Friday, January 30, 2009

Will Coffee Crack your Chromosomes?

Bloggers were amused by the Daily Mail's latest crap science article - a scary cancer story about research that hadn't even been done yet. The article is about a study to be conducted by some University of Leicester scientists, which will investigate whether coffee intake by pregnant women is correlated with DNA changes in babies, similar to those seen in leukemia. In other words: coffee-drinking might be associated with some molecular changes which might point to a risk of leukemia. We should ban the stuff, clearly.

What did scare me though was this line:
Previous research has shown that caffeine damages DNA, cutting cells’ ability to fight off cancer triggers such as radiation.
Hold on, caffeine is genotoxic? That would be pretty worrying. It wouldn't mean that coffee causes cancer, but it would make it highly plausible. But does caffeine in fact damage DNA? That might sound like a simple question to answer. Sadly not. It turns out that caffeine is one of the most researched chemicals in all of genotoxicology, and after over 1000 studies there's no consensus on what, if anything, it does to DNA. The story is remarkably complex and has all the good elements of a scientific intrigue. This review by Steven D'Ambrosio, for example, convincingly argues that:
A number of [genotoxic] effects have been observed [in the lab]. However, they usually appear after very high doses (> 1 mM) of caffeine in combination with genotoxins, and are usually specific to certain cell types and/or cellular parameters. Humans, on the other hand, consume much less caffeine in the diet...thus, it is difficult to implicate caffeine, even at the highest levels of dietary consumption, as a genotoxin to humans.
That's a relief. But right at the end we find that "This work was supported by the National Coffee Association"! If the author was in the pocket of Big Java, how can we trust him? Was he being bribed, perhaps with sacks of top-grade Colombian beans...? There's good evidence that high concentrations of caffeine can enhance the DNA damage produced by genotoxic agents such as radiation. But most of these experiments used caffeine concentrations hundreds of times higher than most coffee-drinkers are likely to experience. And contrary to the Mail's claim, this doesn't mean that coffee damages DNA - it probably works by deregulating the cell replication cycle to prevent DNA repair, which means that in theory, caffeine could even make cancer cells more vulnerable to chemotherapy (but again, only at extreme doses). There's little epidemiological evidence of any association between coffee drinking and cancer; what evidence there is seems to suggest that coffee might even protect against some cancers...

Still, one comforting lesson from all this is that it's not just neuroscience in which seemingly simple questions (like: is there an area of the brain for recognizing faces?) can turn out to be much more complicated than one might hope...

S. D'Ambrosio (1994). Evaluation of the Genotoxicity Data on Caffeine. Regulatory Toxicology and Pharmacology, 19 (3), 243-281. DOI: 10.1006/rtph.1994.1023

Wednesday, January 28, 2009

I'm So Depressed

We use the word depression to refer to a wide range of states of mind, from severe "clinical depression" to just feeling a bit miserable. A "depressing movie" is not one which is going to make you clinically depressed if you watch it.

But the words "mania" and "psychosis" are not like this. People don't often talk about being manic when they're happy - I've heard people describe themselves as "a bit manic", but the bit makes all of the difference. People do use these words wrongly, e.g. some people seem to use "psychotic" when they mean "psychopathic". But even so, these words are always associated with abnormality and pathology. Depression is talked about as "normal" in a way in which mania and psychosis aren't.

This is misleading. True, depression can be hard to distinguish from sadness, stress, ennui, angst and other emotions. But it is a mistake to think that clinical depression is nothing more than a kind of inappropriate or excessive sadness. Being manic is not just being very happy, even if feeling very happy is one of the aspects of mania in some people (but not in all). Depression is not just feeling very sad. In fact, depression can be much more like mania and psychosis than most people tend to think.

In my experience of depression, it's little like sadness. Most people that I've spoken to who have suffered from depression agree; the distinctive thing about depression in most cases seems to be a feeling of lack, or a lack of feeling, in which things lose their value and worth. Textbooks call this anhedonia, a lack of pleasure, which is as good a description as any. Whereas, if you're sad about something, at least you value it.

It's interesting to imagine what things would be like if depression were today a word like mania, as it was 50 years ago.

Saturday, January 24, 2009

The British are Incredibly Sad

Or so says Oliver James(*) on this BBC radio show in which he also says things like "I absolutely embraces the credit crunch with both arms".

Oliver James is a British psychologist best known for his theory of "Affluenza". This is his term for unhappiness and mental illness caused, he thinks, by an obsession with money, status and possessions. Affluenza, James thinks, is especially prevalent in English-speaking countries, because we're more into free-market capitalism than the people of mainland Europe. In fact, he regularly makes the claim that we in Britain, the U.S., Australia etc. are today twice as likely to be mentally ill as "the Europeans". This is because rates of mental illness supposedly surged in the English-speaking world due to 1980s Reagan/Thatcher free market policies. Hence why he welcomes the current economic unpleasantness.

Were all of this true, it would be incredibly important. Certainly important enough to justify writing three books about it and seemingly endless articles for the Guardian. But is it true? Well, this is Neuroskeptic, so you can probably guess. Also, bear in mind that James is someone who is on record as thinking that
[The Tears for Fears song] Mad World. With the chilling line "The dreams in which I'm dying are the best I've ever had", in some respects it is up there with TS Eliot's Prufrock as a poetic account of bourgeois despair.
Obviously poetic taste is entirely subjective etc., but honestly.

Anyway, where did James get the twice-as-bad-as-Europe (or, in some articles, three times as bad) idea from? He says the World Health Organization. Presumably he is referring to one of the World Health Organization's World Mental Health Surveys, such as the analysis presented in this JAMA paper.

At first glance, you can see what he means. This paper reports that the % of people reporting suffering from at least one mental illness over the last year was far higher in the US (26.4%) than in say Italy (8.2%), or Nigeria (4.7%). But on closer inspection, even this data includes some incongruous numbers. Why is Beijing (9.1%) twice as bad as Shanghai (4.3%)? Worse, why does France have a rate of 18.4% while across the border in Germany it's just 9.1%? Are the French twice as materialistic as the Germans? The answer, of course, is that these numbers are more complicated than they appear. In fact, if you believe those figures at face value, you are...well, you're probably Oliver James.

These numbers come from structured interviews, conducted by trained lay researchers, of a random sample of the population. In other words, some guy asked some random people a series of fairly personal questions, reading them off a list, and if they said "Yes" to questions like "Have you ever in your life had a period lasting several days or longer when most of the day you felt sad, empty or depressed?" they might get a tick for "depression". We know this because the interviews used the WHO-CIDI screening questionnaire, the first part of which is here.

As part of my own research, I have been that guy asking the questions (in a slightly different context). At some point I'll write about this in more detail, but suffice to say that it's hard to retrospectively diagnose mental illness in someone you've never met before. The potential for denial, mis-remembering, malingering, forgetting or just plain failure to understand the questions is enormous, although it doesn't come across in the final data, which looks lovely and neat.

The authors of the JAMA paper are well aware of this, which is why they're skeptical of the apparently large cross-national differences. In fact, most of their comment section consists of caveats to that effect. Just a few (edited, emphasis mine - see the full paper for more, it's free):
An important limitation of the WMH surveys is their wide variation in response rate. In addition, some of the surveys had response rates below normally accepted standards [i.e. many people refused to participate]... performance of the WMH-CIDI could be worse in other parts of the world either because the concepts and phrases used to describe mental syndromes are less consonant with cultural concepts than in developed Western countries [almost certainly they are] or because absence of a tradition of free speech and anonymous public opinion surveying causes greater reluctance to admit emotional or substance-abuse problems than in developed Western countries. [again, almost certainly, and Europeans are generally more reserved than Americans in this regard.] ... some patterns in the data (e.g. the much lower estimated rate of alcoholism in Ukraine than expected from administrative data documenting an important role of alcoholism in mortality in that country) raise concerns about differential validity.
There's another, more fundamental problem with this data. On any meaningful criterion of "mental illness", a society in which 25% of people were mentally ill in any given year would probably collapse. The WHO survey, however, is based on the DSM-IV criteria of mental illness. These are increasingly regarded as very broad; for example, DSM-IV does not distinguish between feeling miserable & down for two weeks because your boyfriend leaves you, and spending a month in bed hardly eating for no apparent reason. Both are classed as "depression", and hence a "mental illness", although 50 years ago, only the second would have been considered a disease. For someone who styles himself a rebel in the mould of R. D. Laing, it's baffling that James accepts the American Psychiatric Association's dubious criteria.

What other data could we look at? Ideally, we want a measure of mental illness which is meaningful, objective and unambiguous. Well, there aren't any, but suicide rates might be the next best thing - they're nice hard numbers which are difficult to fudge (although in cultures in which suicide is strongly taboo, suicides may be reported as deaths from other causes). Although not everyone who commits suicide is mentally ill, it is fair to say that if Britain really were twice as unhappy as the rest of Europe, we would have a relatively high suicide rate.

What's the data? Well, according to Chishti et al. (2003), Suicide Mortality in the European Union, we don't.
In fact suicide rates in the UK are boringly middle of the road. They're higher than in places like Greece and Spain, but well below rates in France, Sweden and Germany. Suicide rates are not a direct measure of rates of mental illness, because not everyone who commits suicide is mentally ill, and the rate of successful suicide depends upon access to lethal means. But does this data look compatible with James's claim that rates of "mental illness" are twice as high in Britain as on "the Continent"? - or indeed with James's implicit assumption that "the Continent" is monolithic?

What's odd is that James clearly knows a bit about suicide, or at least he does now, because just today he wrote a remarkably sensible article about suicide statistics for the Guardian. So he really ought to know better.

Drug sales are another nice, hard number. Of course, medication rates do not equal illness rates - in any field of medicine, but especially psychiatry. Doctors in some countries may be more willing to use drugs, or patients may be more willing to take them. With that in mind, the fact that population-adjusted (source, also here) British sales of antidepressant drugs are twice those of Ireland and Italy, equal to those of Spain, and half those of France, Norway and Sweden does not necessarily mean very much. But it hardly supports James's theory either.

Interestingly, although James holds up Denmark as an example of the kind of happy, "unselfish capitalism" that we should aspire to, the Danes take 50% more antidepressants than we do! (They also have a much higher suicide rate.) True, sales of anxiety drugs and sleeping pills are relatively high in the UK, but still less than Denmark's. Most interestingly, sales of antipsychotics are very low in the UK - roughly the same as in Germany and Italy but less than a quarter of the sales in Ireland and Finland!

So cheer up, Anglos. We're not twice as sad as the French. More likely, we are just more open about talking about our problems in the interests of scientific research. However, the French, to their credit, didn't give the world Oliver James.

[BPSDB]

(*) This is Oliver James, psychologist. Not to be confused with: Oliver James, heartthrob actor; Oliver James, Fleet Foxes song; and Oliver James, Ltd.

The WHO World Mental Health Survey Consortium (2004). Prevalence, Severity, and Unmet Need for Treatment of Mental Disorders in the World Health Organization World Mental Health Surveys. JAMA: The Journal of the American Medical Association, 291 (21), 2581-2590. DOI: 10.1001/jama.291.21.2581

Thursday, January 22, 2009

Autism, Testosterone and Eugenics

The media's all too often shabby treatment of neuroscience and psychology research doesn't just propagate bad science - it means that the really interesting and important bits go unreported. This is what's just happened with the controversy surrounding a paper from the Autism Research Centre (ARC) at Cambridge University - Bonnie Auyeung et al.'s Fetal Testosterone and Autistic Traits. For research published in a journal with an impact factor of 1.538 (i.e. not good), it's certainly attracted plenty of attention - but for all the wrong reasons.


The Autism Research Centre is headed by the dashing Simon Baron-Cohen, also one of the authors on the paper. He's probably the world's best-known autism researcher, and the author of some excellent books on the subject including the classic Mindblindness and The Essential Difference. Mindblindness, in particular, probably deserves a lot of the credit for interesting a generation of psychologists in autism. A big cheese, in other words. Surely his greatest achievement, however, is being Borat's cousin.

Baron-Cohen is famous for his theory that the characteristic features of autism are exaggerated versions of the allegedly characteristic features of male, as opposed to female, cognition. Namely, autistic people have difficulties understanding the emotions and behaviour of other people ("empathizing"), but may show excellent rote memory and understanding of abstract, mathematical or mechanical systems ("systematizing"). He and his colleagues have also hypothesised that an excess of the well-known masculinizing hormone testosterone, could be responsible for the hyper-male brains of autistics, just as testosterone is responsible for the development of masculine traits in boys. Amongst other things this would explain why rates of diagnosed autistic spectrum disorders are several times higher in boys than in girls.

Now, this is one of those wide-ranging theories which serves to drive research, rather than strictly following from the evidence. It's a bold idea, but there is, at the moment, not enough data to confirm or reject it. The simple view that testosterone = maleness = autism is almost certainly wrong, but it's a neat theory, there's clearly something to it, and, as one of the commentators on the paper puts it
To date, no theory of autism has provided such a connecting thread linking etiology, neuropsychology and neural bases of autism.
Anyway, the paper reports on an association between testosterone levels in the womb and later "autistic traits" in childhood. 235 healthy children were studied; for all of these kids, the levels of testosterone in the womb during pregnancy were known, because their mothers had had amniocentesis, collecting a sample of fluid from the womb. Amniocentesis is not risk-free and it can't be done for research purposes, but the mothers here got amniocentesis for medical reasons and then agreed to take part in research as well. Testosterone levels in the amniotic fluid were measured; notably, this probably represents testosterone produced by the fetus itself, rather than the mother.

The headline finding was that fetal testosterone (fT) levels were correlated with later "autistic traits", as judged by the mothers, who filled out questionnaires about their kids' behaviour at the age of about 8. Here's a nice plot showing the correlation. The vertical axis, "AQ-child total", is the parent's total reported score on the "Autism Quotient" questionnaire. Higher scores are meant to indicate autism-like traits (although see below). You'll also notice that fT levels are much higher in the boy fetuses than in the girl fetuses - not surprisingly. That's it - a statistically significant association, but there is still a lot of scatter on the plot. The correlation was still significant if the very high-scoring children were ignored. A similar pattern emerged using a different autism rating scale, but was less significant - probably because many scores were very low.
So, this was a perfectly decent study with an interesting result, but it's only a correlation, and not an especially strong one. How did this get written up? "New research brings autism screening closer to reality", puffed the Guardian's front page! They suggested that measuring fetal testosterone levels might be a way of testing for autism pre-natally, thus sparking off an entirely formulaic debate about the ethics of selective abortion, the usual denunciations of "eugenics", etc. Long story short - Catholics are against it, the National Autistic Society say it's a dilemma, while a family doctor on Comment is Free is unsure about the "test" because she can't read the article: she doesn't have access to the journal.

Lest it be said that the ethical debate is important in itself, even if the details of the testosterone-based screening test might be inaccurate, bear in mind that "testing for autism" is likely to raise unique issues. Are we talking about a test which could distinguish "low-functioning autism" - which can leave children unable to lead anything like a normal life - from "high-functioning autism", sometimes associated with incredible intellectual achievement? Would the test distinguish classical high-functioning autism from Asperger's? When and if a test is developed, these will be crucial questions. You cannot simply speculate about "a test for autism" in the abstract.

Anyway, after a few days of this nonsense Baron-Cohen rightly protested that the paper had nothing to do with prenatal testing, and that such testing isn't on the horizon yet.
The new research was not about autism screening; the new research has not discovered that a high level of testosterone in prenatal tests is an indicator of autism; autism spectrum disorder has not been linked to high levels of testosterone in the womb; and tests (of autism) in the womb do not allow termination of pregnancies.
Most importantly, there were no autistic kids in the study - all of the children were "normal", although some were rated highly on the autism measures. Moreover, as the plot above shows, any testosterone-based screening test would be very inaccurate. Which is why no experts proposed one.

Just like last time. Back in 2007 the Observer (the Sunday version of the Guardian) ran a front-page article about Simon Baron-Cohen's work on the epidemiology of autism. They said that he'd found that autism rates in Britain were "surging"; they probably aren't, and Baron-Cohen's data didn't show that they were, but despite this the Observer took weeks to clarify the issue (for details of the saga, see Bad Science.) In both cases, some important research about autism from Cambridge ended up on the front page of the newspaper, but the debate which followed completely missed the real point. It would have been better for all concerned if the research had never caught the attention of journalists at all.

The actual study in this case is very interesting, as are the three academic commentaries and a response from the authors published alongside it. I can't cover all of the nuances of the debate, but some of the points of interest include: the question of whether the Autism Quotient (AQ) questionaire actually measures autistic behaviours, or just male behaviours; the point that it may be testosterone present in baby boys shortly after birth, not in the womb, which is most important; and the interesting case of children suffering from Congenital Adrenal Hyperplasia, a genetic disorder leading to excessive testosterone levels; Baron-Cohen et. al. suggest that girls with this disorder show some autism-like traits, but this is controversial. Clearly, this is a crucial point.

Overall, while it's too soon to pass judgement on the extreme male brain theory or the testosterone hypothesis, both must be taken seriously. As for autism prenatal testing, I suspect that this will only come when more of the genetic causes of autism are identified. There is no single "gene for autism"; currently a couple of genes responsible for a small % of autism cases are known: CNTNAP2, for example.

Once we have a good understanding of the many genes which can lead to the many different forms of autistic-spectrum disorders, genetic testing for autism will be possible; I doubt that testosterone levels or anything else will serve as a non-genetic marker, because autism almost certainly has many different causes, and many different associated biochemical abnormalities. Maybe I'm wrong, but even so, if you're worried about hypothetical people aborting hypothetical autistic fetuses, you don't have to worry quite yet. Actual children are dying in Zimbabwe - worry about them.

[BPSDB]

ResearchBlogging.orgBonnie Auyeung, Simon Baron-Cohen, Emma Ashwin, Rebecca Knickmeyer, Kevin Taylor, Gerald Hackett (2009). Fetal testosterone and autistic traits British Journal of Psychology, 100 (1), 1-22 DOI: 10.1348/000712608X311731

Autism, Testosterone and Eugenics

The media's all too often shabby treatment of neuroscience and psychology research doesn't just propagate bad science - it means that the really interesting and important bits go unreported. This is what's just happened with the controversy surrounding a paper from the Autism Research Center (ARC) at Cambridge University - Bonnie Auyeung et al.'s Fetal Testosterone and Autistic Traits. For research published in a journal with an impact factor of 1.538 (i.e. not good), it's certainly attracted plenty of attention - but for all the wrong reasons.


The Autism Research Center is headed by the dashing Simon Baron-Cohen, also one of the authors on the paper. He's probably the world's best-known autism researcher, and the author of some excellent books on the subject including the classic Mindblindness and The Essential Difference. Mindblindness, in particular, probably deserves a lot of the credit for interesting a generation of psychologists in autism. A big cheese, in other words. Surely his greatest achievement, however, is being Borat's cousin.

Baron-Cohen is famous for his theory that the characteristic features of autism are exaggerated versions of the allegedly characteristic features of male, as opposed to female, cognition. Namely, autistic people have difficulties understanding the emotions and behaviour of other people ("empathizing"), but may show excellent rote memory and understanding of abstract, mathematical or mechanical systems ("systematizing"). He and his colleagues have also hypothesised that an excess of the well-known masculinizing hormone testosterone could be responsible for the hyper-male brains of autistics, just as testosterone is responsible for the development of masculine traits in boys. Amongst other things this would explain why rates of diagnosed autistic spectrum disorders are several times higher in boys than in girls.

Now, this is one of those wide-ranging theories which serves to drive research, rather than strictly following from the evidence. It's a bold idea, and there is, at the moment, not enough data to either confirm or reject it. The simple view that testosterone = maleness = autism is almost certainly wrong, but it's a neat theory, there's clearly something to it, and, as one of the commentators on the paper puts it:
To date, no theory of autism has provided such a connecting thread linking etiology, neuropsychology and neural bases of autism.
Anyway, the paper reports on an association between testosterone levels in the womb and later "autistic traits" in childhood. 235 healthy children were studied; for all of these kids, the levels of testosterone in the womb during pregnancy were known, because their mothers had had amniocentesis, collecting a sample of fluid from the womb. Amniocentesis is not risk-free and it can't be done for research purposes, but the mothers here got amniocentesis for medical reasons and then agreed to take part in research as well. Testosterone levels in the amniotic fluid were measured; notably, this probably represents testosterone produced by the fetus itself, rather than the mother.

The headline finding was that fetal testosterone (fT) levels were correlated with later "autistic traits", as judged by the mothers, who filled out questionnaires about their kids' behaviour at the age of about 8. Here's a nice plot showing the correlation. The vertical axis, "AQ-child total", is the parent's total reported score on the "Autism Quotient" questionnaire. Higher scores are meant to indicate autism-like traits (although see below). You'll also notice that fT levels are much higher in the boy fetuses than in the girl fetuses - not surprisingly. That's it - a statistically significant association, but with a lot of scatter on the plot. The correlation was still significant if the very high-scoring children were excluded. A similar pattern emerged using a different autism rating scale, but was less significant - probably because many scores were very low.
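As a rough illustration of how an association can be statistically significant yet still leave plenty of scatter, here's a toy simulation. Everything below is invented for illustration - only the sample size of 235 is taken from the study:

```python
import math
import random

def pearson_r(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

random.seed(42)
n = 235  # sample size matching the study; the data themselves are made up
ft = [random.gauss(0, 1) for _ in range(n)]       # pretend "fT" levels
aq = [0.3 * x + random.gauss(0, 1) for x in ft]   # weakly related "AQ" scores

r = pearson_r(ft, aq)
# t-statistic for testing r != 0; with n - 2 degrees of freedom,
# |t| > ~1.97 corresponds to p < .05
t = r * math.sqrt((n - 2) / (1 - r ** 2))
print(f"r = {r:.2f}, variance explained = {r ** 2:.1%}, t = {t:.2f}")
```

With a couple of hundred subjects, even a correlation that explains well under a fifth of the variance can clear the p < .05 bar - which is why "statistically significant" and "strong" are very different claims.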
So, this was a perfectly decent study with an interesting result, but it's only a correlation, and not an especially strong one. How did this get written up? "New research brings autism screening closer to reality", puffed the Guardian's front page! They suggested that measuring fetal testosterone levels might be a way of testing for autism pre-natally, thus sparking off an entirely formulaic debate about the ethics of selective abortion, the usual denunciations of "eugenics", etc. Long story short - Catholics are against it, the National Autistic Society says it's a dilemma, while a family doctor on Comment is Free is unsure about the "test" because she can't read the article: she doesn't have access to the journal.

Lest it be said that the ethical debate is important in itself, even if the details of the testosterone-based screening test might be inaccurate, bear in mind that "testing for autism" is likely to raise unique issues. Are we talking about a test which could distinguish "low-functioning autism" - which can leave children unable to lead anything like a normal life - from "high-functioning autism", sometimes associated with incredible intellectual achievement? Would the test distinguish classical high-functioning autism from Asperger's? When and if a test is developed, these will be crucial questions. You cannot simply speculate about "a test for autism" in the abstract.

Anyway, after a few days of this nonsense Baron-Cohen rightly protested that the paper had nothing to do with prenatal testing, and that such testing isn't on the horizon yet.
The new research was not about autism screening; the new research has not discovered that a high level of testosterone in prenatal tests is an indicator of autism; autism spectrum disorder has not been linked to high levels of testosterone in the womb; and tests (of autism) in the womb do not allow termination of pregnancies.
Most importantly, there were no autistic kids in the study - all of the children were "normal", although some were rated highly on the autism measures. Moreover, as the plot above shows, any testosterone-based screening test would be very inaccurate. Which is why no experts proposed one.
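To put a number on "very inaccurate": imagine, purely hypothetically, that a fetal testosterone cutoff existed which caught 80% of children who would go on to an autism-spectrum diagnosis, while also flagging 10% of everyone else. With a prevalence of around 1%, Bayes' theorem says most positives would still be false alarms. (All three figures here are invented for the sake of the arithmetic - the paper proposes no such cutoff.)

```python
# Hypothetical screening-test arithmetic; none of these numbers come from the paper
sensitivity = 0.80          # fraction of true cases the test would catch
false_positive_rate = 0.10  # fraction of unaffected children wrongly flagged
prevalence = 0.01           # assumed rate of autism-spectrum diagnoses

# Bayes' theorem: P(condition | positive test)
true_positives = sensitivity * prevalence
false_positives = false_positive_rate * (1 - prevalence)
ppv = true_positives / (true_positives + false_positives)
print(f"P(autism | positive test) = {ppv:.1%}")  # roughly 7.5%
```

In other words, even under these generous assumptions more than nine out of ten "positive" results would be wrong - the familiar base-rate problem that sinks screening tests for rare conditions built on weak markers.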

Just like last time. Back in 2007 the Observer (the Sunday version of the Guardian) ran a front-page article about Simon Baron-Cohen's work on the epidemiology of autism. They said that he'd found that autism rates in Britain were "surging"; they probably aren't, and Baron-Cohen's data didn't show that they were, but despite this the Observer took weeks to clarify the issue (for details of the saga, see Bad Science). In both cases, some important research about autism from Cambridge ended up on the front page of the newspaper, but the debate which followed completely missed the real point. It would have been better for all concerned if the research had never caught the attention of journalists at all.

The actual study in this case is very interesting, as are the three academic commentaries and a response from the authors published alongside it. I can't cover all of the nuances of the debate, but some of the points of interest include: the question of whether the Autism Quotient (AQ) questionnaire actually measures autistic behaviours, or just male behaviours; the point that it may be testosterone present in baby boys shortly after birth, not in the womb, which is most important; and the interesting case of children suffering from Congenital Adrenal Hyperplasia, a genetic disorder leading to excessive testosterone levels. Baron-Cohen et al. suggest that girls with this disorder show some autism-like traits, but this is controversial - and if it holds up, it's a crucial piece of evidence.

Overall, while it's too soon to pass judgement on the extreme male brain theory or the testosterone hypothesis, both must be taken seriously. As for autism prenatal testing, I suspect that this will only come when more of the genetic causes of autism are identified. There is no single "gene for autism"; currently only a couple of genes responsible for a small percentage of autism cases are known: CNTNAP2, for example.

Once we have a good understanding of the many genes which can lead to the many different forms of autistic-spectrum disorders, genetic testing for autism will be possible; I doubt that testosterone levels or anything else will serve as a non-genetic marker, because autism almost certainly has many different causes, and many different associated biochemical abnormalities. Maybe I'm wrong, but even so, if you're worried about hypothetical people aborting hypothetical autistic fetuses, you don't have to worry quite yet. Actual children are dying in Zimbabwe - worry about them.

[BPSDB]

ResearchBlogging.org
Bonnie Auyeung, Simon Baron-Cohen, Emma Ashwin, Rebecca Knickmeyer, Kevin Taylor, Gerald Hackett (2009). Fetal testosterone and autistic traits. British Journal of Psychology, 100(1), 1-22. DOI: 10.1348/000712608X311731

Tuesday, January 20, 2009

Prozac and Old Mice

A while back, I wrote about an important paper which cast doubt on the "neurogenesis hypothesis" of antidepressant drug action, which I summarized as
...the proposal that antidepressants work by promoting the survival and proliferation of new neurones in certain areas of the brain - the "neurogenesis hypothesis". Neurogenesis, the birth of new cells from stem cells, occurs in a couple of very specific regions of the adult brain, including the elaborately named subgranular zone (SGZ) of the dentate gyrus (DG) of the hippocampus. Many experiments on animals have shown that chronic stress, and injections of the "stress hormone" corticosterone, can suppress neurogenesis, while a wide range of antidepressants block this effect of stress and promote neurogenesis. (Other evidence shows that antidepressants probably do this by inducing the expression of neurotrophic signalling proteins, like BDNF.)
It's a popular theory at the moment, not least because it's the only real alternative to the older, much-maligned and certainly incomplete monoamine hypothesis of antidepressants. But the neurogenesis hypothesis has problems of its own. A new paper claims to add to what seems like a growing list of counter-examples: Ageing abolishes the effects of fluoxetine on neurogenesis.

The researchers, Couillard-Despres et al. from the University of Regensburg in Germany, found that fluoxetine (Prozac) enhances hippocampal neurogenesis in mice - as expected - but also found that this only holds true in young mice. In middle-aged and older mice, there was no such effect. That's a new finding, and a very important one.

More specifically, the (male) mice were given injections of Prozac for two weeks each. Compared to mice given placebo injections, the mice on Prozac showed
increased survival and the frequency of neuronal marker expression in newly generated cells of the hippocampus in the young adult group (that is 100 days of age) only. No significant effects on neurogenesis could be detected in fluoxetine-treated adult and elderly mice (200 and over 400 days of age).
For mice, 100 days old corresponds to a human age of about 20 years; 200 days is 35 and 400 days is 65 years. The graph here shows the number of BrdU-labelled cells in the dentate gyrus, a measure of neural progenitor cell survival. As you can see, although Prozac robustly increased BrdU+ cell counts in the 100 day old mice, this effect was much less prominent (although perhaps still present a bit?) in the older mice.
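As it happens, the three rough equivalences above (100 days ≈ 20 years, 200 ≈ 35, 400 ≈ 65) fall exactly on a straight line, so the conversion can be sketched in a couple of lines. This is just a fit to the figures quoted in this post, not an established biological formula:

```python
def mouse_to_human_years(days):
    """Linear fit through the rough equivalences quoted above:
    100 days ~ 20 y, 200 days ~ 35 y, 400 days ~ 65 y."""
    return 5 + 0.15 * days

for days in (100, 200, 400):
    print(f"{days} mouse days ~ {mouse_to_human_years(days):.0f} human years")
```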

It's already well known that hippocampal neurogenesis is age dependent. Young animals (and people) have lots of new neurones being generated, but the rate progressively and inevitably declines with age. This has always been a problem for the simple hypothesis that reduced neurogenesis causes depression, because if that were the case, we'd all be paralyzed by despair by the age of 50. Despite this, it remained plausible that antidepressants worked by increasing neurogenesis, but this new evidence suggests otherwise.

Or does it? What if it turns out that fluoxetine has no antidepressant-like effects in old rodents? In that case, the neurogenesis hypothesis would be supported, not weakened, by this evidence. The authors of the paper don't even consider this possibility, which is a little odd. They do note that antidepressants are effective in older people with depression, but given that this is a paper about mice, that's not the same thing. Someone needs to find out whether Prozac has antidepressant-like effects in the same kind of old mice as those used in this study. If so, the neurogenesis hypothesis will be looking pretty fragile.

This should also serve as a reminder that lab mice are animals, not research robots. They get old, like the rest of us, and research done only on young mice, or male mice, or a certain breed of mice, may not be applicable to others. I have two cats: if you stroke the grey one on the belly, she'll purr contentedly. But if you foolishly assume that the tabby one is the same, you'll get bitten pretty quickly...

ResearchBlogging.org
S Couillard-Despres, C Wuertinger, M Kandasamy, M Caioni, K Stadler, R Aigner, U Bogdahn, L Aigner (2009). Ageing abolishes the effects of fluoxetine on neurogenesis. Molecular Psychiatry. DOI: 10.1038/mp.2008.147

Sunday, January 18, 2009

Biases, Fallacies and other Distractions

One of the pitfalls of debate is the temptation to indulge in tearing down an opponent's arguments. It's fun, if you're stuck behind a keyboard but still feeling the primal urge to bash something's head in with a rock. Yet if you're interested in the truth about something, the only thing that should concern you is the facts, not the arguments that happen to be made about them.

Plenty has been written about arguments and how they can be bad: sins against good sense are called "fallacies" and there are many lists of them. Some of the more popular fallacies have become household names - ad hominem attacks, the appeal to authority, and everyone's favorite, the straw man argument.

Likewise, cognitive psychologists have done much to name and catalogue the various ways in which our minds can deceive us. Under the blanket name of "biases" many of these are well known - there's confirmation bias, cognitive dissonance, rationalization, and so on.

There's a reason why so much has been said about fallacies and biases. They're out there, and they're a problem. When you set your mind to it, you can find them almost anywhere - no matter who you are. This, for example, is written by someone who believes that HIV does not cause AIDS. By most standards, this makes him a kook. And he probably is a kook, about AIDS, but he’s not stupid. He makes some perfectly sensible points about cognitive dissonance and the psychology of science. And here, he offers further words of wisdom:
I have no satisfactory answer to offer, unfortunately, for how AIDStruthers could be brought to useful mutual discussion.
...
Here’s a criterion for whether a discussion is genuinely substantive or not, directed at clarification and increased understanding: no personal comments adorn the to-and-fro. If B appears not to understand what A is saying, then A looks for other ways of presenting the case, A doesn’t simply keep repeating the same assertions spiced with “Why can’t you…?”, and the like. [Added 28 December: Another hallmark of the non-substantive comments is that the commentator not only keeps harping on the same thing but does so by return e-mail, leaving no time to consider what s/he is replying to; see Burun's admission of suffering from that failing.]
...
One lesson from experience is that the aim of Rethinkers cannot be to convince the AIDStruthers. It soon becomes a sheer waste of time to attempt to argue substance with them; a waste of time because you can’t learn anything from them, and they are incapable of learning anything from you. Rethinkers and Skeptics should address the bystanders, onlookers, the unengaged “silent majority”. There seem always to be with us some people who cheerfully continue to believe that the Earth is only about 6,000-10,000 years old, and many other things that most of us judge to be utterly disproved by factual evidence.
That could have come straight from the pen of such pillars of scientific respectability as Carl Sagan or Orac - until you remember that by "Rethinkers" and "Skeptics" he means people who don't believe that HIV causes AIDS, while "AIDStruthers" is his term for those who do, that is, almost every medical and scientific professional.

The lesson here is that you don't have to be right in order to notice that people who disagree with you are irrational, or that much of the opposition to your belief is dogmatic. The sad fact is that stubbornness and a tendency to dogmatism are a part of human nature and it's very hard to escape from them; likewise, it's very hard to make a complex argument without saying something at least technically fallacious (that witty aside? Ad hominem attack!)

The point is that none of this matters. If something is true, then it's true even if everyone who believes it is a dogmatic maniac. So it's certainly true even if the only people you know who believe it are idiots. What's the chance that you've argued with the smartest Christian ever, or the best informed opponent of homeopathy? In which case - the fallacies and biases of the people you have argued with certainly don't matter. In an argument, the only thing of importance is what the facts are, and the way to find out is to look at the evidence.

If you're taking the time to name and shame the fallacies in someone's reasoning or to diagnose their biases, then you're not talking about the evidence - you're talking about your opponent(s). Why are you so fascinated by him...? To spend time lamenting the irrationality of your opponents is unhealthy. The only people who have a reason to care about other people’s fallacies and biases are psychologists. Daniel Kahneman got half a Nobel Prize for his work on cognitive biases - it's his thing. But if your thing is HIV/AIDS, or evolution, or vaccines and autism, or whatever, then it's far from clear that you have any legitimate interest in your opponent's flaws. In all likelihood, they are no more flawed than anyone else - or even if they are, their real problem is not that they're making ad hominem attacks (or whatever), but that they're wrong.

So when barely-coherent columnist Peter Hitchens writes in the Daily Mail about wind farms:

If visitors from another galaxy really are going round destroying wind turbines, then it is the proof we have been waiting for that aliens are more intelligent than we are.

The swivel-eyed, intolerant cult, which endlessly shrieks – without proof – that global warming is man-made, has produced many sad effects.

The point is not that people who believe that global warming is man-made are not a cult. They're not, but even if they were, it wouldn't matter. The swiveliness of their eyes or the pitch of their voice is not obviously relevant either.

Of course, if you're out to have fun bashing heads, or writing columns for the Daily Mail, then go ahead. Learn the names of as many fallacies and biases as you can (including the Latin names if possible - that's always extra impressive) and go nuts. But if you're serious about establishing or discussing the truth about something, then there is only one set of biases and fallacies you ought to care about – your own.

[BPSDB]
