Turing's Brain

  • Published 28 Nov 2014
  • Author Dwayne Godwin
  • Source BrainFacts/SfN

As a new biography of Alan Turing hits the big screen, it’s worth remembering the foundational role Turing played in artificial intelligence and his contribution to the idea of how brains learn.

Alan Turing is perhaps best known as the father of modern computing. His work transitioned us from human computations (a “computer” at the time was an actual person who solved specific types of problems, often using pencil and paper) to machine-based methods of taking on complex and intensive algorithms, leading eventually to the smartphone or computer on which you may be reading this.

The group the British government assembled at Bletchley Park, where Turing worked during the war, was responsible for breaking the codes used in top-secret Nazi communications during World War II, including the Enigma and Lorenz ciphers. Not to spoil the movie, but if you know your history you know they were successful. The resulting intelligence was instrumental in disrupting key operations of the Axis powers. As just one example of the effort’s importance, the Allies knew the position of all but two of Germany’s 58 Western divisions just prior to the D-Day invasion in 1944. In fact, the code breakers at Bletchley Park are credited with shortening the war by years.

As Turing’s triumphs and tragic end are remembered, it’s also worth exploring a lesser-known contribution Turing made – to neuroscience.

Turing’s influence in this area is subtle, and related to his interest in machine learning. It’s one thing to make a computer that can perform rote operations. It’s quite another to devise a computer that can adapt its operations in response to novel inputs to implement a machine form of learning.

In 1948 Turing wrote an influential report, “Intelligent Machinery,” that outlined his ideas on learning machines. Of particular note is the notion of “unorganized” machines, which could modify their own state in response to different types of inputs. The idea of learning machines foreshadowed the types of learning that occur in the brain. In fact, in the report Turing himself draws an analogy with the cerebral cortex and suggests that the computational problem is very similar:

“The difference between the languages spoken on the two sides of the Channel is not due to difference in the development of the French-speaking and English-speaking parts of the brain. It is due to the linguistic parts having been subjected to different training. We believe then that there are large parts of the brain, chiefly in the cortex, whose function is largely indeterminate. In the infant these parts do not have much effect: the effect they have is uncoordinated. In the adult they have great and purposive effect: the form of this effect depends on training in childhood. A large remnant of the random behavior of infancy remains in the adult.” [p.16].

Here, Turing not only discusses the concept of what we now call “equipotentiality” of the cerebral cortex, he also seems to intuit the role of development and learning in constructing a working brain. In Turing’s view, an artificial intelligence isn’t so much made as grown. Turing goes on to write that he’d very much like to explore these concepts, but the methods for doing so were still paper-based at the time; it would be many years before computers equal to the task became readily available.
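Today these ideas fit in a few lines of code. Below is a minimal sketch, in Python, of an unorganized machine in the spirit of Turing’s A- and B-types: randomly wired two-input NAND units updating in lockstep, with a per-connection switch standing in for the B-type’s trainable interference. The class name, the synchronous update, and the convention that a cut connection reads as 1 are my own illustrative choices, not Turing’s notation.

```python
import random

# A sketch of an "unorganized machine": units wired together at random,
# each computing NAND of two other units, all updating at once.
class UnorganizedMachine:
    def __init__(self, n_units, seed=0):
        rng = random.Random(seed)
        # Random construction is the point: any structure must come later,
        # from training, not from design.
        self.wiring = [(rng.randrange(n_units), rng.randrange(n_units))
                       for _ in range(n_units)]
        self.state = [rng.randint(0, 1) for _ in range(n_units)]
        # B-type flavor: a switch on each incoming connection that a
        # trainer could flip to "organize" the machine.
        self.enabled = [[True, True] for _ in range(n_units)]

    def step(self):
        new_state = []
        for (i, j), (ei, ej) in zip(self.wiring, self.enabled):
            a = self.state[i] if ei else 1   # assumption: a cut input reads as 1
            b = self.state[j] if ej else 1
            new_state.append(1 - (a & b))    # NAND of the two inputs
        self.state = new_state

machine = UnorganizedMachine(8)
for _ in range(5):
    machine.step()
    print(machine.state)
```

Left alone, the network simply wanders through states; “training” amounts to flipping the connection switches until useful behavior emerges, which is the sense in which such a machine is grown rather than made.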

Another key step on the road to artificial neurons was the work of Warren McCulloch and Walter Pitts, whose 1943 paper offered some of the first formal models of Boolean, neuron-like elements. However, unlike Turing’s unorganized machines, these simple computing elements did not possess the modifiable synapses or changing firing thresholds that we now know to be present in real neurons, and from that perspective they did not capture some of the more exciting features of Turing’s models.
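To make the contrast concrete, here is a minimal sketch of a McCulloch-Pitts unit under the common textbook reading of their 1943 model: Boolean inputs, a fixed threshold, and “absolute” inhibition, where any single active inhibitory input vetoes firing. The function names are mine.

```python
# A McCulloch-Pitts unit: fires (returns 1) when enough excitatory
# inputs are active and no inhibitory input is.
def mp_unit(excitatory, inhibitory, threshold):
    if any(inhibitory):                      # absolute inhibition: one veto suffices
        return 0
    return 1 if sum(excitatory) >= threshold else 0

# Boolean logic falls out of fixed wiring alone; nothing here learns.
AND = lambda x, y: mp_unit([x, y], [], threshold=2)
OR  = lambda x, y: mp_unit([x, y], [], threshold=1)
NOT = lambda x:    mp_unit([1], [x], threshold=1)

assert AND(1, 1) == 1 and AND(1, 0) == 0
assert OR(0, 1) == 1 and NOT(1) == 0
```

Everything in such a unit is fixed by construction; there is no knob that experience can turn, which is exactly the contrast with Turing’s modifiable machines noted above.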

McCulloch credited an earlier paper by Turing with inspiring their approach to neural networks, later noting:

“…it was not until I saw Turing’s paper [on computable numbers] that I began to get going the right way around, and with Pitts’ help formulated the required logical calculus. What we thought we were doing (and I think we succeeded fairly well) was treating the brain as a Turing machine.”

Donald Hebb’s contribution to our modern concept of learning and memory can be encapsulated by the often-used phrase, “cells that fire together, wire together”. However, the link between Turing’s ideas and those of Hebb is more tortuous. While Hebb credits McCulloch and others with antecedent formalisms in his seminal description of his rule (providing at least an indirect path to Turing), it may be that Hebb was unaware of the ideas on self-organizing machines laid out in Turing’s 1948 report when he devised his learning rule. It’s also true that Turing didn’t publish or promote his work at a rate equal to his genius.

Hebb’s success was not only the rule itself; it was in describing that rule not in the realm of machines and their operations, but in the world of meat. For the first time, there was an organic repository for the critical modification needed to form memories:

“The assumption can be precisely stated as follows: when an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A’s efficiency, as one of the cells firing B, is increased.” (Hebb, 1949).
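In its modern formal guise, Hebb’s postulate is a one-line weight update: a synapse strengthens in proportion to coincident pre- and postsynaptic firing. A minimal sketch, with a learning rate and toy firing patterns that are my own illustrative choices:

```python
eta = 0.1                      # learning rate (illustrative choice)
w = [0.0, 0.0, 0.0]            # synapses from three input cells ("A"s) onto cell "B"

# Cells 0 and 1 repeatedly fire together; cell 2 stays silent.
for _ in range(20):
    pre = [1, 1, 0]
    post = 1 if sum(pre) >= 2 else 0          # B fires when driven hard enough
    for i, p in enumerate(pre):
        w[i] += eta * p * post                # co-active pre and post: strengthen

print(w)   # synapses 0 and 1 have grown; synapse 2 never co-fired and stays 0.0
```

In this bare form the weights grow without bound; later treatments add decay or normalization, but the whole of “fire together, wire together” lives in that single update line.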

It’s interesting to read Hebb’s ideas about how this would occur, which involved physical changes to the synapse. At the time, glutamate’s position as the predominant neurotransmitter in the brain was unknown. It was only much later that Bliss and Lømo (1973) provided an embodiment of Hebb’s concept in the form of long-term potentiation, one of the best models we have of how synapses are persistently modified during learning.

Turing’s life and accomplishments remind us that key insights often cannot be predicted in advance, and that basic scientific knowledge is essential to uncovering novel ideas with incalculable practical value. No one could have predicted the value of Turing’s accomplishments to our modern way of life.

It’s impossible to know exactly how much the idea of long-term potentiation owes to Turing. But it’s clear that the basic notion of learning machines contributed to a scientific focus on plasticity and learning in the brain. I like to think that, had he lived longer, Turing would have done more work in this area and made many more important contributions to computational neuroscience and artificial intelligence.


