The Geometry of Behavior

What do a swinging pendulum, an ecosystem, and a firing neuron all have in common?  They can all be modeled by astoundingly similar equations.  These equations comprise a rich and fascinating area of mathematics called dynamics.

A classic real-life application of dynamics is one many of us learned in high school biology: the competition between two species.

Figure 1: Predator-prey dynamics over 20 years. (1)

The two populations rise and fall in linked cycles; at any given time each species exerts influence on the other, producing the pattern seen.

Interestingly, the populations of both species can be plotted together in a single plane, in what’s known as the phase space.  Here, the time variable is no longer one of the axes.  Instead, picture the “now” as a dot that moves continuously along the curved trajectory shown below.  Depending on the time, the dot will be in a different place, corresponding to different population sizes of gazelles and lions.


Figure 2: Limit cycle corresponding to predator-prey dynamics

Let’s discuss what’s happening at each of the four points shown on the graph.

  1. The populations of both species are high.  The lions are happy, since there are plenty of gazelles to eat.  Unfortunately for the gazelles, though, the population of lions is high.  The number of gazelles decreases rapidly, and the lions thrive.
  2. There are still quite a few lions, but now only a few gazelles.  So, more and more lions starve, and the lion population dwindles.
  3. Both populations are low.  With only a few lions, the gazelles are free to eat, mate and raise their young in peace.  The gazelle population rises.
  4. The lion population is still low.  However, with more and more gazelles on the savannah, the lions enjoy abundant food.  The lion population begins to rise.

And the cycle repeats.

The above limit cycle can be modeled by the Lotka–Volterra system of differential equations (2), where dG/dt is the rate of change of the gazelle population and dL/dt is the rate of change of the lion population:

dG/dt = αG – βLG

dL/dt = γLG – δL

In the first line, the rate of change of the gazelle population correlates positively with the number of gazelles, G, times some factor corresponding to their rate of reproduction, α.  Gazelle growth rate also correlates negatively with βGL, corresponding to the rate at which gazelles meet (and subsequently feed) the lions.

In the second line, the rate of growth of the lion population now correlates positively with the rate at which lions and gazelles meet.  Further, it correlates negatively with the number of lions L, times some factor δ, corresponding to their rate of death, which is, in this case, due to natural causes or disease.

Note that this model assumes that the gazelles have infinite food and only one predator; meanwhile, the lions eat only gazelles and have no predators.  As a consequence of this, if there were only gazelles, their population would grow exponentially (like y = e^x).  Similarly, if there were only lions, their population would decay exponentially (like y = e^(–x)).
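
If you’d like to play with this system yourself, here is a minimal Python sketch (SciPy, Matplotlib, the parameter values, and the starting populations are all illustrative assumptions on my part, not taken from the figures above) that integrates the Lotka–Volterra equations and draws both views: the populations over time, as in Figure 1, and the loop in the phase plane, as in Figure 2.

```python
# Minimal sketch: integrate the Lotka-Volterra system and view it both ways.
# Parameter values and initial populations are illustrative, not taken from the post.
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

alpha, beta, gamma, delta = 1.1, 0.4, 0.1, 0.4   # reproduction, predation, conversion, death rates

def lotka_volterra(t, y):
    G, L = y                        # gazelle and lion populations
    dG = alpha * G - beta * L * G   # gazelles: growth minus predation
    dL = gamma * L * G - delta * L  # lions: growth from predation minus natural death
    return [dG, dL]

sol = solve_ivp(lotka_volterra, (0, 50), [10.0, 5.0],
                dense_output=True, rtol=1e-8, atol=1e-8)
t = np.linspace(0, 50, 2000)
G, L = sol.sol(t)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.plot(t, G, label="gazelles")
ax1.plot(t, L, label="lions")
ax1.set(xlabel="time", ylabel="population")   # Figure 1 view: populations over time
ax1.legend()
ax2.plot(G, L)                                # Figure 2 view: the loop in the phase plane
ax2.set(xlabel="gazelles", ylabel="lions")
plt.tight_layout()
plt.show()
```

(One subtlety: in the pure Lotka–Volterra model, the loop you trace out depends on the starting populations, since every orbit is a closed curve.)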

I mentioned that a swinging pendulum can also be modeled by dynamical systems.  I challenge the reader to plot the distance from the center point (aka the position), as well as the velocity, of an indefinitely swinging pendulum as functions of time, similarly to Figure 1.  Now, plot position (on the x-axis) against velocity (on the y-axis), à la Figure 2.  What changes if friction is taken into account?  If you’re stumped, I’d be happy to explain in the comments!  Also see Dynamics: The Geometry of Behavior by Abraham and Shaw. (3)

What does all this have to do with neuroscience?  Well, a spiking neuron exhibits repeated oscillations, quite like the recurring spikes of the gazelle population in Figure 1!  The equations become a bit more complicated, but the concept is the same.

Figure 3: Step-by-step development of the FitzHugh-Nagumo model. (4) Plotted in MATLAB

Let’s reconstruct the firing of a neuron, starting from the basics.  Figure 3, graph A shows the following equation:

dV/dt = V

where V is voltage.  How does this equation produce graph A?  Well, at any given time, the rate of change of the voltage is equal to the voltage itself.  This means that, as long as voltage is positive, so is the change in voltage, meaning that voltage increases.  Furthermore, the greater voltage gets, the faster it increases, causing it to get even greater.  This system builds on itself and ultimately grows exponentially, much like the behavior of the gazelles with no lions.

Well, this certainly isn’t what we want, since the voltage of a spiking neuron doesn’t grow exponentially.  So, let’s add another term in graph B:

dV/dt = V – V³

If V is small, then V³ is even smaller.  So, for very low values of V (close to 0), graph B behaves quite a bit like graph A.  However, as V increases, V³ begins to catch up.  The gradually increasing V³ detracts from dV/dt, causing the growth of V to slow down.  By the time V reaches 1, V³ has caught up exactly, and dV/dt becomes zero.
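
To see that leveling-off concretely, here is a tiny sketch (Python again; the step size and starting voltage are arbitrary choices on my part) that steps dV/dt = V – V³ forward in time:

```python
# Tiny sketch: forward-Euler integration of dV/dt = V - V^3.
# Starting just above zero, V grows at first and then flattens out near V = 1,
# where the V^3 term exactly cancels the V term.
dt, V = 0.01, 0.05
trace = []
for _ in range(2000):          # 20 time units
    V += dt * (V - V**3)
    trace.append(V)
print(round(trace[-1], 4))     # ~1.0
```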

We’re getting closer, but still not done.  We don’t want voltage to just become constant; we want it to rise, then fall, then rise again.  In order to achieve this, we add some more complexity by introducing a second variable, with its own differential equation: resistance, or R.

dV/dt = V – V³ – R

dR/dt = V – R

Let’s think about what’s happening here.  In the first line, R detracts from the growth of V.  As R grows, it inhibits dV/dt; it quite literally provides resistance.  In the second line, the growth of R positively correlates with V.  This means that, if voltage gets high, so does the growth of R.  Soon, R will become high enough to properly provide the resistance discussed earlier.  Also, though, the growth of R negatively correlates with its own value.  This prevents R from getting too big.  A large value of R will create a lower, possibly even negative, value of dR/dt.  The result of the above system is Figure 3 graph C.
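
If you want to watch V and R push against each other numerically, here is a minimal forward-Euler sketch (Python; the step size, duration, and starting point are arbitrary illustrative choices, not taken from the post’s MATLAB code):

```python
# Minimal sketch: forward-Euler integration of the untuned two-variable system
#   dV/dt = V - V^3 - R,   dR/dt = V - R
# Step size, duration, and the starting point (V, R) = (1, 0) are arbitrary choices.
dt = 0.01
V, R = 1.0, 0.0
history = []
for step in range(5000):              # 50 time units
    dV = V - V**3 - R                 # R pushes back against the growth of V
    dR = V - R                        # R chases V; a large R slows its own growth
    V += dt * dV
    R += dt * dR
    history.append((step * dt, V, R))

# print a few snapshots of (t, V, R)
for t, v, r in history[::1000]:
    print(f"t = {t:5.1f}   V = {v:6.3f}   R = {r:6.3f}")
```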

We’re getting pretty close.  All we have to do now is manipulate various parameters (much like α through δ in the Lotka–Volterra system) to produce the more-realistic spiking pattern seen in graph D.  This is in fact known as the FitzHugh-Nagumo model (4), and is given (in one form) by the following dynamical system:

dV/dt = 3 · [V – (1/3) · V³ – R]

dR/dt = (1/3) · (V – R + 0.4)
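
Here is a sketch of how one might integrate this system numerically, again in Python rather than the MATLAB used for the figures; the time span and initial condition are illustrative assumptions, so the exact trace you get may differ from graph D.

```python
# Minimal sketch: integrate one form of the FitzHugh-Nagumo model, as written above.
# Initial condition and time span are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

def fitzhugh_nagumo(t, y):
    V, R = y
    dV = 3.0 * (V - V**3 / 3.0 - R)     # fast voltage variable
    dR = (V - R + 0.4) / 3.0            # slow recovery ("resistance") variable
    return [dV, dR]

sol = solve_ivp(fitzhugh_nagumo, (0, 100), [1.0, 0.0], max_step=0.05)

plt.plot(sol.t, sol.y[0])               # voltage trace (compare Figure 3, graph D)
plt.xlabel("time")
plt.ylabel("V")
plt.show()
```

If you want to experiment further, the usual trick (not part of the equations above) is to add a constant input-current term inside the bracket of dV/dt; sweeping that current moves the model between a single excitable blip and sustained, repetitive firing.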

Figure 4: The Hodgkin-Huxley model. (5)  Plotted in MATLAB

Figure 4 displays a much more complicated system known as the Hodgkin-Huxley model for neuronal excitability. (5)  Graph A shows a model for the spiking of a squid axon with a voltage-gated persistent K+ current, a voltage-gated transient Na+ current, and an Ohmic leak current.  In graph B, we now plot voltage on the x-axis.  On the y-axis, we plot m, the dynamically defined activation gating variable, corresponding to the probability that a given Na+ channel is open.  The result: a limit cycle, just like the one we saw in Figure 2!  Dynamics somehow houses an array of very different aspects of life under one all-encompassing roof.  It’s enough to make one wonder whether the unusually frequent applicability of math to the physical and biological sciences is a mere coincidence.
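
Figure 4 was plotted in MATLAB; as a rough Python companion, here is a hedged sketch of the standard Hodgkin-Huxley squid-axon equations using common textbook constants (an assumption on my part — they may not match whatever values produced Figure 4).  It integrates V together with the gating variables m, h, and n under a constant injected current, then plots the voltage trace and m against V, the two views described above.

```python
# Sketch of the classic Hodgkin-Huxley squid-axon model, using standard textbook
# constants (an assumption; they may differ from the values behind Figure 4).
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

# Maximal conductances (mS/cm^2), reversal potentials (mV), capacitance (uF/cm^2)
g_Na, g_K, g_L = 120.0, 36.0, 0.3
E_Na, E_K, E_L = 50.0, -77.0, -54.4
C_m = 1.0
I_ext = 10.0  # constant injected current (uA/cm^2), enough for repetitive firing

# Voltage-dependent rate constants for the gating variables m, h, n
def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * np.exp(-(V + 65.0) / 80.0)

def hodgkin_huxley(t, y):
    V, m, h, n = y
    I_Na = g_Na * m**3 * h * (V - E_Na)   # transient sodium current
    I_K = g_K * n**4 * (V - E_K)          # persistent potassium current
    I_L = g_L * (V - E_L)                 # Ohmic leak current
    dV = (I_ext - I_Na - I_K - I_L) / C_m
    dm = alpha_m(V) * (1.0 - m) - beta_m(V) * m
    dh = alpha_h(V) * (1.0 - h) - beta_h(V) * h
    dn = alpha_n(V) * (1.0 - n) - beta_n(V) * n
    return [dV, dm, dh, dn]

# Start from rest (roughly -65 mV, gating variables near their resting values)
y0 = [-65.0, 0.05, 0.6, 0.32]
sol = solve_ivp(hodgkin_huxley, (0, 100), y0, max_step=0.02)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.plot(sol.t, sol.y[0])                 # voltage trace (compare Figure 4, graph A)
ax1.set(xlabel="time (ms)", ylabel="V (mV)")
ax2.plot(sol.y[0], sol.y[1])              # m against V (compare Figure 4, graph B)
ax2.set(xlabel="V (mV)", ylabel="m (Na+ activation)")
plt.tight_layout()
plt.show()
```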

My ultimate goal: recreate my model of the basal ganglia using dynamical systems to represent each nucleus.  Until then: I keep exploring!

References

  1. A hilarious and fascinating piece on Predator-Prey dynamics by Bradd Libby.  Also check out the sequel for an utterly mind-blowing extension of the earlier findings.
  2. Lotka-Volterra Equation
  3. Dynamics: The Geometry of Behavior by Abraham and Shaw
  4. FitzHugh-Nagumo model
  5. Hodgkin-Huxley model
  6. Dynamical Systems in Neuroscience: The Geometry of Excitability and Bursting by Eugene M. Izhikevich.  A premier text on the dynamics of computational neuroscience.

3 comments on “The Geometry of Behavior”

  1. Richard says:

    Josh certainly is a fine expositor of ideas. Presumably his goal to model the essential functioning of the basal ganglia with dynamical systems in place of each nucleus can be completed only with exhaustive knowledge of the various factors in the system, the friction-like resistances and the factors responsible for spikes in neuronal voltage. I suppose this must be an ongoing project of contemporary neuroscience in general.

    He says “It’s enough to make one wonder whether the unusually frequent applicability of math to the physical and biological sciences is a mere coincidence.” It’s almost a perennial question as to why the world should seem to be rationally understandable, or at least, for mathematics to be applicable to it. One response is that it’s not, and that appearances to the contrary are illusions only. Another classic response is something like the Anthropic principle: the world need not have been rational at all, but a world which is partly rational is possible and in such a world nature would be more likely to select for rationality and so rational beings could evolve. This still leaves room for it to be a coincidence that we can understand the physical world through rational means. But there is another view according to which a universe which does not obey mathematical or logical laws is impossible and so a world which does obey them is not a coincidence, but a necessity. For it to be possible it must be logically consistent, and if it is consistent like that, it can be described mathematically. Currently, I am sympathetic with this last view: the world simply must follow the laws of maths and logic, otherwise it would be impossible and so not exist. So I think Josh’s surprise at the end is a bit like Ben’s concerning Eigenvectors in the blog entry, “Vector Spaces”. It’s not so much a surprise (as no alternative is possible) as a fascinating realization of or amusement with the inevitable. There is A LOT more to say about this; the relationship between logic and the world is one of my specialities. So I’d be happy to discuss.

    • Ben says:

      “[T]he world simply must follow the laws of maths and logic, otherwise it would be impossible and not exist.”

      I’ve got a few questions. These are related to ideas raised in Parfit’s Why Anything? Why This?

      You (and Parfit) seek to explain facts about our world by appeal to these facts’ logical necessity. These arguments, though, seem to presume the existence, at the time of our world’s creation, of logical mechanisms which governed the creation of worlds. These mechanisms, we’d be forced to believe, “screened” “candidate worlds”, permitting logical ones and rejecting unsound ones. We’re left with the world we call our own.

      Perhaps I’m not understanding how an appeal to facts’ logical necessity can explain their physical obtainment.

      How can we assume the existence of logical rules at a “time” when the world didn’t exist yet? How can we assume that our world’s inception took place within some environment respecting fully formed logical laws which resemble our own? Where did these logical laws come from? How were they enforced? To be explicit, which mechanisms would or could have blocked the creation of a logically unsound world?

      The “selection” process need not have been this explicit for your argument to stand. To suggest a logical condition which could have rejected a candidate world, though, seems to offer little insight in the way of our world’s physical creation.

      How was our world physically constructed? What existed before it? What caused it to come into existence? I don’t understand what it means for these questions to have logical answers.

      Excellent characterization of this distinctive “fascinating realization of or amusement with the inevitable”.

      • Josh says:

        I think Richard’s point is precisely that there were, or are, baseline logical rules to which any “potential” world must adhere. Given that, it’s just not coherent to imagine a world that doesn’t abide by these rules. Those worlds aren’t “screened out”, but rather never have the capability of coming into being.

        Kind of a lame analogy, but imagine dropping a coin from waist level onto the ground. You don’t ask “what happened when the coin tried to move up, but then gravity didn’t allow it?” The coin just never tried to move up.

        Of course, all this is only relevant if we abide by the theory that a non-logical world is impossible. Richard mentions other theories which don’t assume this.
