Computational Methods for Nonlinear Systems

Physics 7682 - Fall 2014

Instructor: Chris Myers

Mondays & Fridays 1:30-3:30, Rockefeller B3 (directions)

Chaos and Lyapunov Exponents


We have five exercises on discrete maps. Before working on this one, you should first do the Logistic Map exercises.

Chaotic dynamical systems have sensitive dependence on initial conditions. This is commonly described as the "butterfly effect": the flap of a butterfly's wing in Brazil can build up to change a tornado later in Texas. In this exercise, we measure the sensitivity to initial conditions for the logistic map by introducing the Lyapunov exponent.

Lyapunov Exponents

The mathematics community lumps together continuous dynamical evolution laws and discrete mappings as both being dynamical systems. The general stretching and folding exhibited by our map is often seen in driven physical systems without conservation laws. In this exercise, we will focus on values of $\mu$ near one, where the motion is mostly chaotic. Chaos is sometimes defined as motion where the final position depends sensitively on the initial conditions. Two trajectories, starting a distance $\epsilon$ apart, will typically drift apart in time as $e^{\lambda t}$, where $\lambda$ is the Lyapunov exponent for the chaotic dynamics.

Learning Goals

Science: You will learn about chaos, discrete maps, and Lyapunov exponents.

Computation: You will learn about iterative algorithms and extracting asymptotic behavior.


  1. Start an IPython session (either in a terminal or a notebook -- with the --pylab flag) and %run your file.
  2. Make a graph of the iterates of x for two nearby values of x0, with mu=0.9; i.e., let y0=x0+eps, where eps is some small number that you choose (see how small you can make it). Plot the sequences x0, f(x0), f(f(x0)), . . . , f[n](x0) and y0, f(y0), . . . We call the system chaotic if these trajectories diverge exponentially. Here we will systematically explore this behavior.
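As a sketch of this step (the function and variable names f, trajectory, mu, and eps are my own choices, not from the hints file), the two nearby trajectories might be generated like this:

```python
# Iterate the logistic map f(x) = 4*mu*x*(1-x) for two nearby starting
# points and collect both trajectories for plotting.

def f(x, mu=0.9):
    """One application of the logistic map f(x) = 4*mu*x*(1-x)."""
    return 4.0 * mu * x * (1.0 - x)

def trajectory(x0, n, mu=0.9):
    """Return the list [x0, f(x0), f(f(x0)), ..., f^n(x0)]."""
    xs = [x0]
    for _ in range(n):
        xs.append(f(xs[-1], mu))
    return xs

eps = 1e-10            # small separation of the initial conditions
xs = trajectory(0.3, 50)
ys = trajectory(0.3 + eps, 50)
# In an ipython --pylab session: plot(xs); plot(ys)
# The two curves track each other at first, then visibly separate.
```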
  3. Download the file and rename it "".
  4. Open this file in a text editor (kate or emacs -- we recommend against using kedit) or load it into the IPython dashboard (started with ipython notebook --pylab inline).
  5. Open a terminal window, move to the correct directory and start ipython, or click on the notebook in the dashboard. You will find it convenient to start python with the --pylab flag -- i.e., type ipython --pylab or ipython notebook --pylab.
  6. Write the function TrajectoryDifference, which generates the difference between two nearby trajectories.
  7. Test it out. How small a difference in starting points can you use?
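A minimal sketch of TrajectoryDifference, under the assumption that it returns the list of separations |x_n - y_n| (the actual skeleton in the hints file may differ):

```python
def f(x, mu=0.9):
    """The logistic map f(x) = 4*mu*x*(1-x)."""
    return 4.0 * mu * x * (1.0 - x)

def TrajectoryDifference(x0, y0, n, mu=0.9):
    """Return the separations |x_i - y_i| for n iterates of the map."""
    diffs = []
    x, y = x0, y0
    for _ in range(n):
        diffs.append(abs(x - y))
        x, y = f(x, mu), f(y, mu)
    return diffs

diffs = TrajectoryDifference(0.3, 0.3 + 1e-8, 10)
```

On the question of how small eps can be: in double precision the spacing between adjacent floats near x0 = 0.3 is roughly 1e-16, so an eps much smaller than that rounds away entirely and the two "trajectories" are identical.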
  8. Write a function PlotTrajectoryDifference which calls this previous one and produces a plot of the trajectory differences on a semilog plot. Does it produce a straight line? This is a sign of exponential growth.
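One possible sketch of PlotTrajectoryDifference, assuming the TrajectoryDifference function of step 6 returns the list of separations (the Agg backend and file output are only so the sketch runs outside an interactive --pylab session):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend; drop this in an ipython --pylab session
import matplotlib.pyplot as plt

def f(x, mu=0.9):
    """The logistic map f(x) = 4*mu*x*(1-x)."""
    return 4.0 * mu * x * (1.0 - x)

def TrajectoryDifference(x0, y0, n, mu=0.9):
    """Separations |x_i - y_i| for n iterates of the map (as in step 6)."""
    diffs, x, y = [], x0, y0
    for _ in range(n):
        diffs.append(abs(x - y))
        x, y = f(x, mu), f(y, mu)
    return diffs

def PlotTrajectoryDifference(x0, eps=1e-10, n=60, mu=0.9):
    diffs = TrajectoryDifference(x0, x0 + eps, n, mu)
    plt.semilogy(range(n), diffs)   # a straight line here means exponential growth
    plt.xlabel("n")
    plt.ylabel("|x_n - y_n|")
    plt.savefig("trajectory_difference.png")
    return diffs

diffs = PlotTrajectoryDifference(0.3)
```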
  9. We now need to fit a straight line to this data by least squares; i.e., we want to find $c$ and $\lambda$ such that $|x_n-y_n|$ is close to $c e^{\lambda n}$. More quantitatively, we want to minimize the sum of the logarithmic residuals, $$\chi^2= \sum_n [\log(|x_n-y_n|)-(d+\lambda n)]^2,$$ where $d=\log(c)$. This is a linear least squares problem. We give 3 ways to do this optimization (don't feel obliged to go through each method -- but it is good to learn a few different methods).
    • Linear least squares problems like this can be solved using linear algebra. We want $\partial\chi^2/\partial d=0$ and $\partial\chi^2/\partial \lambda=0$. These two equations can be combined into a matrix equation $$ \left(\begin{array}{c} \sum_n \log|x_n-y_n|\\\sum_n n\log|x_n-y_n|\end{array}\right)= \left(\begin{array}{cc} \sum_n 1&\sum_n n\\ \sum_n n&\sum_n n^2\end{array}\right) \left(\begin{array}{c} d\\\lambda\end{array}\right) $$ The vector on the left is easily calculated. The matrix on the right can be written down analytically, and inverted. This then yields an expression for $d$ and $\lambda$. Fill in the steps, and write a function which takes a trajectory difference and calculates $d$ and $\lambda$. [A skeleton function is not included in the hints file.]
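Filling in the steps, a sketch of the analytic solution (the 2x2 matrix is inverted by hand; the function name is my own, since the exercise notes that no skeleton is provided):

```python
import math

def fit_lyapunov_normal_eqs(diffs):
    """Solve the 2x2 normal equations for (d, lambda) analytically."""
    ns = range(len(diffs))
    logs = [math.log(v) for v in diffs]
    S1 = float(len(diffs))                      # sum_n 1
    Sn = float(sum(ns))                         # sum_n n
    Snn = float(sum(n * n for n in ns))         # sum_n n^2
    b0 = sum(logs)                              # sum_n log|x_n - y_n|
    b1 = sum(n * L for n, L in zip(ns, logs))   # sum_n n*log|x_n - y_n|
    det = S1 * Snn - Sn * Sn
    # Inverse of [[S1, Sn], [Sn, Snn]] applied to (b0, b1):
    d = (Snn * b0 - Sn * b1) / det
    lam = (S1 * b1 - Sn * b0) / det
    return d, lam

# Exact exponential data with d = -2, lambda = 0.5 recovers the parameters
diffs = [math.exp(-2.0 + 0.5 * n) for n in range(20)]
d_fit, lam_fit = fit_lyapunov_normal_eqs(diffs)
```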
    • The method just shown is mechanical. You can readily do it for any fit function which is linear in some parameters. Since it is mechanical, why not have the computer generate the relevant matrices and take the inverses? Not surprisingly, scipy already has a command for this: scipy.linalg.lstsq. The intuitive way to understand this function is to write our desired "equation" in matrix form: $$ \left(\begin{array}{c} \log|x_1-y_1|\\ \log|x_2-y_2|\\ \log|x_3-y_3|\\ \ldots \end{array}\right)\sim \left(\begin{array}{cc} 1&1\\ 1&2\\ 1&3\\ \ldots\end{array}\right) \left(\begin{array}{c} d\\\lambda\end{array}\right) $$ In the scipy documentation, the vector on the left is called "b", and the matrix on the right is called "a". Write a function which calculates "a" and "b", then calls scipy.linalg.lstsq(a, b). [A skeleton function is not included in the hints file.]
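A sketch of this version (again assuming the trajectory difference is a sequence of |x_n - y_n| values; the function name is my own):

```python
import numpy as np
import scipy.linalg

def fit_lyapunov_lstsq(diffs):
    """Least-squares fit of log|x_n - y_n| to d + lambda*n via scipy.linalg.lstsq."""
    n = np.arange(len(diffs), dtype=float)
    a = np.column_stack((np.ones_like(n), n))   # the matrix "a": columns (1, n)
    b = np.log(diffs)                           # the vector "b"
    coeffs, residues, rank, sing_vals = scipy.linalg.lstsq(a, b)
    d, lam = coeffs
    return d, lam

# Exact exponential test data with d = -2, lambda = 0.5
diffs = np.exp(-2.0 + 0.5 * np.arange(20))
d_fit, lam_fit = fit_lyapunov_lstsq(diffs)
```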
    • As a final approach, one can use a more general minimization technique. This is known as "nonlinear least squares." This is pretty inefficient, but works well enough in the present case. It is very general, and can be used in cases where the model is not linear.
      • Write a function LyapunovFitFunc which takes as its arguments a tuple p=(lambda,d) and a trajectory difference, and returns a list of the log residuals, $r_n=\log(|x_n-y_n|) - (d + \lambda n)$
      • Test this function out by plotting the resulting residuals for different "guesses" of lambda and d. Can you find a best fit by trial and error?
      • Write a function FitLyapunovExponent which uses this function and the scipy routine scipy.optimize.leastsq to find the optimal value of lambda and d. You can use the args argument of scipy.optimize.leastsq to pass the data to LyapunovFitFunc, or you can use one of the other closure strategies discussed in the pendulum module.
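A sketch of this pair of functions (the argument order p = (lambda, d) follows the text; the initial guess p0 is my own choice):

```python
import numpy as np
import scipy.optimize

def LyapunovFitFunc(p, diffs):
    """Log residuals r_n = log|x_n - y_n| - (d + lambda*n), with p = (lam, d)."""
    lam, d = p
    n = np.arange(len(diffs))
    return np.log(diffs) - (d + lam * n)

def FitLyapunovExponent(diffs, p0=(0.1, 0.0)):
    """Best-fit (lambda, d), passing the data via the args argument of leastsq."""
    p_opt, ier = scipy.optimize.leastsq(LyapunovFitFunc, p0, args=(diffs,))
    lam, d = p_opt
    return lam, d

# Exact exponential test data with lambda = 0.5, d = -2
diffs = np.exp(-2.0 + 0.5 * np.arange(20))
lam_fit, d_fit = FitLyapunovExponent(diffs)
```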
  10. Check to see if you agree with the fits. The easiest way to do this is to plot the trajectory difference and the fit on the same graph. Write a routine PlotFit which does this.
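One way PlotFit might look (a sketch under my own naming and file-output conventions; the fit parameters can come from any of the three methods):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend; drop this in an ipython --pylab session
import matplotlib.pyplot as plt

def PlotFit(diffs, lam, d, filename="lyapunov_fit.png"):
    """Overlay the data |x_n - y_n| and the fit c*e^(lambda*n) on a semilog plot."""
    n = np.arange(len(diffs))
    fit = np.exp(d + lam * n)
    plt.semilogy(n, diffs, "o", label="|x_n - y_n|")
    plt.semilogy(n, fit, "-", label="fit")
    plt.xlabel("n")
    plt.legend()
    plt.savefig(filename)
    return fit

# Synthetic data with lambda = 0.5, d = -2 just to exercise the sketch
diffs = np.exp(-2.0 + 0.5 * np.arange(20))
fit = PlotFit(diffs, 0.5, -2.0)
```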
  11. Bonus: How does the Lyapunov exponent depend on x0? How does it depend on mu?
  12. Double bonus: Consider the map $f_{pq}(x)=4 p y(1-y)$ where $y=4 q x(1-x)$. Make a density plot of the Lyapunov exponent as a function of $p$ and $q$.