In 1997 IBM’s supercomputer Deep Blue beat world chess champion Garry Kasparov, 3½ games to 2½, in a 6-game series. This quintessential triumph of machine over man is often called the most spectacular event in chess history. As one of chess’s greatest minds fell to a network of circuits and transistors, the very line between brain and computer was blurred. And one could only imagine that, in the future, this distinction might disappear completely.
But we’re certainly not there yet. The basal ganglia, a group of nuclei located in the vertebrate forebrain, work in concert to regulate a remarkable variety of important functions, from eye movement and procedural learning to higher-order functions like emotion and cognition—more than Deep Blue could ever achieve. Imbalance of neurotransmitter levels within these pathways leads to motor disorders like Parkinson’s and Huntington’s, and is even associated with neuropsychiatric disorders like schizophrenia. The importance of this group of nuclei is far-reaching, and demonstrates that, despite what sensationalized chess matches may suggest, the brain is still the world’s most powerful computer.
I decided to give the brain a run for its money, however, and built a C++ program to model the basal ganglia. The basal ganglia motor loop begins with the release of dopamine by the Substantia Nigra. This excites the striatum, and that excitation is then channeled through each of the basal ganglia's nuclei in turn. Some nuclei release glutamate, the brain's standard excitatory neurotransmitter, exciting the next nucleus in the loop; others release GABA, glutamate's inhibitory counterpart. To model this in my program, I begin with the excitation of the striatum and then pass a running excitation value through each of the nuclei in the motor loop. In each nucleus, excitation is multiplied by the nucleus's excitation coefficient, which ranges from 1 (excitatory) to -1 (inhibitory). By default, these coefficients are set to either 1 or -1, but the user can alter them to simulate stimulation or ablation of that nucleus.
For example, the excitation within the MGP equals 1 times the excitation of the STA, and the excitation within the VA/VL equals -1 times the excitation of the MGP. Changing the excitation coefficient of the STA from 1 to 0.25 would decrease the excitation of the STA, simulating a lesion. Changing the MGP's coefficient from -1 to -1.5 would increase the MGP's inhibition, simulating stimulation.
As an experiment, I randomly chose two nuclei—the MGP and the STA—and simultaneously varied both of their excitation coefficients from their default values to zero: the MGP's coefficient ranged from -1 to 0, and the STA's from 1 to 0. I then ran the program through the entire motor loop for 1, 5, and 100 iterations, and plotted the final cortical output as a function of both coefficients. The results were pretty interesting, if I may say so myself.
After 1 iteration, the surface is relatively planar. After 5, however, the difference between zero and nonzero excitation is more pronounced: net cortical output increases exponentially as STA excitation approaches 1 and MGP excitation approaches -1. After 100 iterations, this effect is greatly amplified. The increase in cortical output starts slowly, then suddenly spikes at excitation values of about ±0.95. This demonstrates the sensitivity of neural networks: the actual brain probably executes hundreds of iterations of this loop per second, and the greater the number of iterations, the greater the vulnerability to extraneous values. For excitation after 100 loops to equal what it was after 1 (with default coefficients), the excitation of each nucleus must equal about ±0.65, and it's clear that variation in this value would have drastic effects. It's a wonder that brain disorders, motor and otherwise, aren't more common.
However interesting these results are, it's obvious that my program is far too simple to come close to the real thing. A proper model would require consideration not only of the general effect of each nucleus, but also of each neuron within each nucleus, and each neuron that synapses with those neurons, and so on. I argue, however, that the nature of my failure is one of degree, not of type. I may fall short of the mark, but I'm still aiming for the right target. The only things preventing the construction of a fully simulated brain are time and resources, of which I would need a nearly infinite supply and have quite little. Achieving this goal would be difficult, but not impossible. Deep Blue proved that man can be beaten at chess. When a computer can equal man at every aspect of thought, the goal of artificial intelligence will be achieved.
I'll end by touching upon a classic question in philosophy: if the brain is, after all, a sophisticated computer, with only fixed and unchanging mechanics between input (stimulation) and output (thought), does free will even exist? Is the decision-making process simply an illusion, where, in reality, our 'brain' makes the decision for us? My response is that the choice between free will and determinism is a false dilemma, which artificially assumes that the two are mutually exclusive. Yes, the brain is a sophisticated computer, and yes, our thoughts and actions may be predetermined by our accumulated experience. However, we are our brains. We experience whatever computational processes occur during thought as free will. While the cogs and gears of our brains are turning, in this perhaps predetermined fashion, we are making that difficult decision or searching our soul for an answer. Choice may seem like a falsehood from an outside perspective, but from in here, from the vantage point of my brain, choice is very real, no matter the extent of its predetermination.