Man vs. Machine

In 1997, IBM’s supercomputer Deep Blue beat world chess champion Garry Kasparov, 3½ games to 2½, in a six-game match.  This quintessential triumph of machine over man is often called the most spectacular event in chess history.  As one of chess’s greatest minds fell to a network of circuits and transistors, the very line between brain and computer was blurred.  And one could only imagine that, in the future, this distinction might disappear completely.


Basal Ganglia Diagram

But we’re certainly not there yet.  The basal ganglia, a group of nuclei located in the vertebrate forebrain, work in concert to regulate a remarkable variety of important functions, from eye movement and procedural learning to higher-order functions like emotion and cognition, more than Deep Blue could ever achieve.  Imbalance of neurotransmitter levels within these pathways leads to motor disorders like Parkinson’s and Huntington’s diseases, and is even associated with neuropsychiatric disorders like schizophrenia.  The importance of this group of nuclei is far-reaching, and it demonstrates that, despite what sensationalized chess matches may suggest, the brain is still the world’s most powerful computer.

I decided to give the brain a run for its money, however, and built a C++ program to model the basal ganglia.

The basal ganglia motor loop begins with the release of dopamine by the substantia nigra.  This excites the striatum, and that excitation is then channeled through each of the basal ganglia’s nuclei in turn.  Some nuclei release glutamate, the brain’s standard excitatory neurotransmitter, thus exciting the next nucleus in the loop.  Others release GABA, glutamate’s inhibitory counterpart.  To model this in my program, I begin with the excitation of the striatum and then pass a running excitation value through each of the nuclei in the motor loop.  In each nucleus, the excitation is multiplied by that nucleus’s excitation coefficient, which ranges from 1 (excitatory) to -1 (inhibitory).  By default, these coefficients are set to either 1 or -1, but the user can alter them to simulate stimulation or ablation of that nucleus.

For example, the excitation within the MGP equals 1 × the excitation of the STA, and the excitation within the VA/VL equals -1 × the excitation of the MGP.  Changing the excitation coefficient of the STA from 1 to 0.25 would decrease the excitation of the STA, simulating a lesion.  Changing the MGP’s coefficient from -1 to -1.5 would increase the MGP’s inhibition, simulating stimulation.

As an experiment, I randomly chose two nuclei—the MGP and the STA—and simultaneously varied both of their excitation coefficients from their default values toward zero.  Thus the MGP’s inhibition ranged from -1 to 0 and the STA’s excitation ranged from 0 to 1.  I then ran the program through the entire motor loop for 1, 5, and 100 iterations, and plotted the final cortical output as a function of both nuclei’s coefficients.  The results were pretty interesting, if I may say so myself.

1 iteration

5 iterations

100 iterations

After 1 iteration, the surface is relatively planar.  After 5, however, the difference between zero and nonzero excitation is more pronounced: net cortical output increases exponentially as STA excitation approaches 1 and MGP excitation approaches -1.  After 100 iterations, this effect is greatly amplified.  The increase in cortical output starts slowly, then suddenly spikes as the coefficients approach ±0.95.  This demonstrates the sensitivity of neural networks; the actual brain probably executes hundreds of iterations of this loop per second, and the greater the number of iterations, the greater the vulnerability to extraneous values.  For the excitation after 100 loops to equal its value after 1 (with default coefficients), the coefficient of each nucleus must equal about ±0.65, and it’s clear that variation in this value would have drastic effects.  It’s a wonder that brain disorders, motor and otherwise, aren’t more common.

However interesting these results are, it’s obvious that my program is far too simple to closely match the real thing.  A proper model would require consideration not only of the general effect of each nucleus, but also of each neuron within each nucleus, each neuron that synapses onto those neurons, and so on.  I argue, however, that the nature of my failure is one of degree, not of type.  I may fall short of the mark, but I’m still aiming at the right target.  The only things preventing the construction of a fully simulated brain are time and resources, of which I would need nearly infinite amounts and have quite little.  Achieving this goal would be difficult, but not impossible.  Deep Blue proved that man can be beaten at chess.  When a computer can equal man at every aspect of thought, the goal of artificial intelligence will be achieved.

I’ll end by touching upon a classic question in philosophy: if the brain is, after all, a sophisticated computer, with only fixed and unchanging mechanics between input (stimulation) and output (thought), does free will even exist?  Is the decision-making process simply an illusion, where, in reality, our ‘brain’ makes the decision for us?  My response is that the choice between free will and determinism is a false dilemma, which artificially assumes that the two are mutually exclusive.  Yes, the brain is a sophisticated computer, and yes, our thoughts and actions may be predetermined by our accumulated experience.  However, we are our brains.  We experience whatever computation processes occur during thought as free will.  While the cogs and gears of our brains are turning, in this perhaps predetermined fashion, we are making that difficult decision or searching our soul for an answer.  Choice may seem like falsehood from an outside perspective, but from in here, from the vantage point of my brain, choice is very real, no matter the extent of its predetermination.


3 comments on “Man vs. Machine”

  1. Ben says:

    The idea of compatibility between determinism and free will is called “compatibilism” and is fairly well-discussed; the idea that consciousness and brain function can be explained through purely physical phenomena is called “materialism”. Both are common topics in philosophy of mind.

    It might be possible for a pure materialist to still believe that brains cannot be modeled by a computer. Let me explain. That the brain depends entirely on physical properties should not necessarily imply that computers depend on the same set of physical properties; more specifically, it’s possible that the brain depends on a wider set of physical properties than those accessible to computers. Bear with me here: if theoretical physicists routinely argue for the existence of 4, 8, or even 11 dimensions, and if we believe them, we might wonder whether, while purely physical, brains depend on extra phenomena (higher dimensions, quantum mechanics, etc.) that still have not been incorporated into three-dimensional, transistor-based computers.

    Addressing the problem through theory of computation offers further perspective. Some problems, believe it or not, have been proven to be impossible to solve using a computer — including, we may note, problems which our brains can solve. In what’s famously known as the “Halting Problem”, Alan Turing in 1936 proved that (roughly) no computer program can ever solve the problem of determining whether a given program runs forever or not; i.e. that the halting problem is undecidable. What are the implications of this idea that some problems are unsolvable by computers? Possibly, the same as those of the above paragraph: that the model of computation (Turing Machine) on which the undecidability proof relies is limited to a narrower set of characteristics than those which our brains enjoy. A better understanding of Turing machines and the Halting Problem might further clarify this in the future.

  2. Richard says:

    The statement at the end of the second paragraph is only interesting if we already believe that the brain is a computer in the conventional sense of a computer as a Turing machine. Some of the material on the specifics of the workings of the brain might be dealt with more profitably if I had a more sophisticated grasp of neuroscience. However, Josh’s remark about a comprehensive computational modeling of the workings of the human brain being merely a matter of unavailable resources, and so an engineering problem of degree and not of type, is a little question-begging with respect to the points touched upon by Ben’s comment.

    There is an extensive literature in philosophy about the significance of various results in early computational mathematics and mathematical logic for our understanding of the human mind/brain. One notable contributor to this area of discussion in recent times is the Oxford philosopher J. R. Lucas. Lucas was the first philosopher to give an extensive and sophisticated treatment of the idea that certain limiting results established in the 20th century concerning the nature of formal and mechanical proof, and their relation to machines, may entail that the human mind or brain is not a Turing machine. Although Lucas himself endorses this view, there are plenty of philosophers who challenge it on conceptual grounds, among them the noted philosopher David Lewis and, more recently, Timothy Williamson. Although the jury is out on this issue, I think the consensus view is that it is much too hasty to say that the brain, something which gives rise to the human mind, is a Turing machine. So Josh’s claim about the problem being degree rather than type may be a bit premature.

    I suspect our picture of the physical universe is not yet sophisticated enough to lend any serious scientific credibility to the speculation that higher dimensions are being exploited by the human brain which are not exploited by normal computers. It may well be the case; however, at the moment I expect it is merely an interesting idea. It is fairly safe to say that the computers we use do not share the same underlying set of physical properties. There is also a debate about hardware versus wetware when it comes to irreconcilable differences between the organic brain of a human and the mechanical brain of a computer. Also, even if we were to accept Josh’s claim that a simulated brain or mind which behaved just like a real brain or mind would be a real brain or mind, that is, an intelligent machine, then we would nonetheless have to face the extant arguments, notably by John Searle, that passing the Turing test is not sufficient for being intelligent. Josh’s point about free will seems a little confused. He seems to be arguing or gesturing towards a compatibilist position but ends up saying that free will is more or less an illusion.

    • Josh says:

      In retrospect, I certainly was confused. A few brief points, though:

      I think we can remain agnostic regarding Searle’s arguments and questions of intelligence, for now. It still merits asking whether or not we could build a computer that would pass a Turing test. This would certainly be an important achievement, and we could address the Chinese room issue once we come to it.

      Now, towards building this computer. You argue that it’s “fairly safe to say that the computers we use do not share the same underlying set of physical properties”. I’d consider this alone to be a big claim, and I’d be curious to hear more about your reasoning. But I think, in retrospect, that this was really the question I was getting at. Do brains and computers use the same tools (perhaps the same hardware, if not software)? If so, great. Ostensibly, building a “brain” presents a practical problem, albeit a huge one, but not a theoretical question. (Aside: for a problem this vast, the distinction between practical and theoretical might become meaningless). If not, though, the interesting question becomes: what is the brain made out of? Quantum…stuff? Higher dimensions? You mention that musings like these lack credibility given our limited understanding of the universe. But then what can we say about the brain’s makeup? If we’re able to say that it’s made of “more than computers”, we should be able to say what that “more” is.

      Again…4 years later, still confused. Appreciate your input though.
