Thursday, April 28, 2011

The Schizophrenic Computer

All over the world, inanimate objects are getting schizophrenia. Last week, it was a dish (full of neurons).

Before that, it was a computer program. That's according to a paper called Using Computational Patients to Evaluate Illness Mechanisms in Schizophrenia, which appeared in Biological Psychiatry last month - although it involved no biology.

The authors set up a neural network model, called DISCERN, and trained it to "read" stories. The nuts and bolts are, we're reassured, not something that readers of Biological Psychiatry need to worry about: "Its details, many of which are not essential in understanding this study..."

Anyway, it's basically a series of connectionist models. These are computer simulations of a large number of simple units, or nodes, which can have "activations" of varying strengths, and which have "connections" to other nodes. The model "learns" by modifying the strength of these connections according to some kind of simple learning rule.
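If that's hard to picture, here's a toy example in Python - not DISCERN, which is far more elaborate, just the bare idea: a few input nodes connected to a couple of output nodes, with the delta rule playing the part of the "simple learning rule". This is my illustration, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny connectionist network: 4 input nodes fully connected to
# 2 output nodes. The "connections" are just a weight matrix.
n_in, n_out = 4, 2
weights = rng.normal(scale=0.1, size=(n_out, n_in))

def activate(x):
    # An output node's activation is a squashed weighted sum of
    # the input activations.
    return 1.0 / (1.0 + np.exp(-weights @ x))

# Three input patterns and the output patterns we want learned.
patterns = np.array([[1, 0, 1, 0], [0, 1, 0, 1], [1, 1, 0, 0]], float)
targets = np.array([[1, 0], [0, 1], [1, 1]], float)

# "Learning" = nudging each connection to reduce the error (delta rule).
for _ in range(2000):
    for x, t in zip(patterns, targets):
        y = activate(x)
        weights += 0.5 * np.outer((t - y) * y * (1 - y), x)

print(activate(patterns[0]).round(2))  # close to [1. 0.]
```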

Connectionist models are a bit like brains, in other words. A bit. They're orders of magnitude simpler than a real brain, in several different respects. Still, they can "learn" to do some quite complicated things. You can train them to recognise faces and stuff, which is not trivial.


Anyway, DISCERN is a connectionist model of language, but it's not necessarily a model of how the human brain actually learns language. Because we just have no idea how the human brain does that. We don't even know whether the brain acts as a connectionist network at all, above the cellular level. Some cognitive scientists think it does, but others think that those guys are talking out of an orifice that's connected to their mouth, but isn't their mouth. Not in so many words, you understand.

So they set up this system and got it to learn 28 stories, each consisting of multiple sentences. Some of the stories made up the autobiography of a doctor - "I was a doctor. I worked in New York. I liked my job. I was good doctor" - he was not a great communicator, clearly. Others told the story of a gangster ("Tony was a gangster. Tony worked in Chicago..." etc.) The network had to read these stories and then recall them.
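To give a flavour of what "reading" means here: the stories go in as simple propositions, not raw text. Something like this - the format and names are my invention, and the paper's actual slot structure is more elaborate:

```python
# Stories as sequences of simple propositions. The exact slots and
# names here are invented for illustration.
doctor_story = [
    ("I", "was", "doctor"),
    ("I", "worked-in", "New-York"),
    ("I", "liked", "job"),
]
gangster_story = [
    ("Tony", "was", "gangster"),
    ("Tony", "worked-in", "Chicago"),
]

# The network reads each proposition in turn, then has to regenerate
# the whole story from memory; recall is scored against the originals.
```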

The core of the study: they tested what happened when they interfered with the program by introducing certain "bugs" - perturbing the activations or connections of nodes in particular parts of the model. They tried 8.
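The 8 perturbations themselves are specific to DISCERN's innards, but the logic of the experiment is easy to sketch. Something like this, with made-up lesions standing in for theirs:

```python
import numpy as np

rng = np.random.default_rng(1)

# Pretend these are the connection weights of an already-trained
# network that maps the input patterns below to the target outputs.
trained = np.array([[4.0, -1.0, 3.5, -4.5],
                    [-1.0, 4.0, -4.5, 3.5]])
patterns = np.array([[1, 0, 1, 0], [0, 1, 0, 1], [1, 1, 0, 0]], float)
targets = np.array([[1, 0], [0, 1], [1, 1]], float)

def recall_error(w):
    y = 1.0 / (1.0 + np.exp(-patterns @ w.T))
    return np.abs(y - targets).mean()

def disconnect(w, fraction=0.3):
    # One flavour of "bug": sever a random subset of connections.
    return w * (rng.random(w.shape) > fraction)

def noisy(w, sd=1.5):
    # Another flavour: corrupt the connection strengths with noise.
    return w + rng.normal(scale=sd, size=w.shape)

# Apply each candidate lesion to a copy of the intact model and see
# how badly recall degrades.
for name, lesion in [("intact", lambda w: w),
                     ("disconnected", disconnect),
                     ("noisy", noisy)]:
    print(name, recall_error(lesion(trained.copy())).round(3))
```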

They compared the computer's performance to that of 37 actual patients with schizophrenia (or the related schizoaffective disorder), along with 20 healthy controls, who were tested on a similar task. When the human patients came to recall the stories they'd read, they tended to make more errors of particular kinds: mixing up who did what ("agent switching"), and adding stuff that wasn't in the story ("derailment").
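Scoring something like "agent switching" is easy to picture in the toy proposition format from above (my invention, not their actual scoring procedure):

```python
# Count recalled propositions where the action and object match an
# original proposition but the agent has been swapped.
original = [("I", "worked-in", "New-York"), ("Tony", "worked-in", "Chicago")]
recalled = [("Tony", "worked-in", "New-York"), ("Tony", "worked-in", "Chicago")]

def agent_switches(original, recalled):
    by_content = {(verb, obj): agent for agent, verb, obj in original}
    return sum(1 for agent, verb, obj in recalled
               if (verb, obj) in by_content and by_content[(verb, obj)] != agent)

print(agent_switches(original, recalled))  # 1 - Tony has invaded the doctor's story
```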

What they found was that DISCERN made the same kinds of errors when it was given 2 particular deficits, "working memory disconnection" and "hyperlearning". The other 6 deficits didn't cause the same pattern of findings. Hyperlearning was the best match.
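In the paper, "hyperlearning" means something like runaway, over-intense learning during the consolidation of memories. You can get the flavour from a toy: store two memories normally, then keep hammering one of them with a huge learning rate, and watch recall of the other drift towards it. Again, this is my sketch, not DISCERN's actual manipulation:

```python
import numpy as np

rng = np.random.default_rng(2)
w = rng.normal(scale=0.1, size=(2, 4))

def step(w, x, t, lr):
    # One delta-rule update; modifies the weight matrix in place.
    y = 1.0 / (1.0 + np.exp(-w @ x))
    w += lr * np.outer((t - y) * y * (1 - y), x)

def recall(x):
    return (1.0 / (1.0 + np.exp(-w @ x))).round(2)

# Two "memories" that share an input feature (the third node).
story_a = (np.array([1.0, 0, 1, 0]), np.array([1.0, 0]))  # the doctor
story_b = (np.array([0.0, 1, 1, 0]), np.array([0.0, 1]))  # the gangster

for _ in range(500):                 # normal learning: both stored
    step(w, *story_a, lr=0.5)
    step(w, *story_b, lr=0.5)
print("before:", recall(story_a[0]), recall(story_b[0]))

for _ in range(500):                 # "hyperlearning": story B only, huge rate
    step(w, *story_b, lr=25.0)

# The over-trained memory drags the shared connections with it,
# so recall of story A drifts towards story B's pattern.
print("after: ", recall(story_a[0]), recall(story_b[0]))
```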

They comment that
A majority of three-parameter best-fit hyperlearning simulations also recurrently confused specific agents in personal stories (including the self-representation) with specific agents in crime stories (and vice versa) in a highly nonrandom fashion.

Noteworthy was the high frequency of agent-slotting exchanges between the hospital boss, Joe, and the Mafia boss, Vito, and parallel confusions between the “I” self-reference and underling Mafia members, suggesting generalization of boss/underling relationships.

Insofar as story scripts provide templates for assigning intentions to agents, a consequence of recurrent agent-slotting confusions could be assignment of intentions and roles to autobiographical characters (possibly including the self) that borrow from impersonal stories derived from culture or the media.

Confusion between agent representations in autobiographical stories and those in culturally determined narratives could account for the bizarreness of fixed, self-referential delusions, e.g., a patient insisting that her father-in-law is Saddam Hussein or that she herself is the Virgin Mary.

So if you believe it, they've just made a program that experiences schizophrenic-type paranoid delusions.

It's fair to say that this is speculative. On the other hand, it's an interesting approach, and at least it's theory-based, rather than just an attempt to use ever more powerful genetic, neuroimaging and biological techniques to find differences between a patient group and a control group.

Hoffman RE, Grasemann U, Gueorguieva R, Quinlan D, Lane D, & Miikkulainen R (2011). Using computational patients to evaluate illness mechanisms in schizophrenia. Biological Psychiatry, 69(10), 997-1005. PMID: 21397213
