Lyapunov exponent of the logistic map (Mathematica Code)

In a previous post I’d shown a way to get the Lyapunov exponent from the time series data of any map. In this quick tutorial, I’ll show you a cleaner way to get the Lyapunov exponent for the specific case of the logistic map, and then plot it against r using a really short Mathematica script.

First, the mathematical analysis that simplifies the expression for the Lyapunov exponent of a map, and in particular the logistic map. This discussion follows this article.

Suppose the initial infinitesimal perturbation is \delta x_{0}. Then we have, for n\rightarrow\infty:

\left|\delta x_{n}\right|=\left|\delta x_{0}\right|e^{\lambda n}\Rightarrow e^{\lambda n}=\underset{\delta x_{0}\rightarrow0}{\lim}\left|\frac{\delta x_{n}}{\delta x_{0}}\right|=\left|\frac{dx_{n}}{dx_{0}}\right|.

But we can write:

\frac{dx_{n}}{dx_{0}}=\frac{dx_{n}}{dx_{n-1}}\frac{dx_{n-1}}{dx_{n-2}}\ldots\frac{dx_{1}}{dx_{0}},

where each x_{i}=f(x_{i-1}). So we have:

\frac{dx_{n}}{dx_{0}}=f'(x_{n-1})f'(x_{n-2})\ldots f'(x_{0}).

Therefore,

\;e^{\lambda n}=\left|f'(x_{n-1})f'(x_{n-2})\ldots f'(x_{0})\right|

\Rightarrow\lambda(r;x_{0})=\underset{n\rightarrow\infty}{\lim}\frac{1}{n}\ln\left|f'(x_{n-1})f'(x_{n-2})\ldots f'(x_{0})\right|

=\underset{n\rightarrow\infty}{\lim}\frac{1}{n}\sum\limits_{k=0}^{n-1}\ln\left|f'(x_{k})\right|.

For the logistic map, f'(x)=r(1-2x).
So we have:

\lambda(r;x_{0})=\underset{n\rightarrow\infty}{\lim}\frac{1}{n}\sum\limits_{k=0}^{n-1}\ln\left|r(1-2x_{k})\right|.
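As a quick check on the plot below (these are standard results for the logistic map, not derived here): wherever the fixed point x^{*}=1-1/r is stable, the orbit converges to x^{*}, so the average of \ln\left|f'(x_{k})\right| converges to \ln\left|f'(x^{*})\right|=\ln\left|2-r\right|; and at r=4 the map is conjugate to the tent map, for which the exponent is known exactly. So we should see

\lambda(r)=\ln\left|2-r\right|\;\;\text{for}\;1<r<3,\qquad\lambda(4)=\ln2\approx0.693.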

We can put the above formula in a short Mathematica script to obtain \lambda as a function of r and plot it. The following is the code:

\[Lambda][r_] := Module[{f, l},
   f[x_] := r x (1 - x);
   l[x_] := Log[Abs[r (1 - 2 x)]];
   Mean[l[NestList[f, 0.1, 1*^2]]]];
Plot[\[Lambda][r], {r, 0, 4}, PlotStyle -> Thickness[.0001], 
 AxesLabel -> {"r", "\[Lambda](r)"}]

And the following is the output:
Lyapunov exponent for logistic map (Mathematica code)

In the line that uses NestList, we specify the starting point of the trajectory. However, I noticed that the output does not depend on the starting point.
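If you'd rather not use Mathematica, a minimal Python sketch of the same calculation would look something like this (it assumes numpy and matplotlib are available; the iteration count and plotting details are just illustrative):

import numpy as np
import matplotlib.pyplot as plt

def lyapunov(r, x0=0.1, n=1000):
    # average ln|f'(x_k)| = ln|r(1 - 2 x_k)| along the orbit of the logistic map
    x = x0
    total = 0.0
    for _ in range(n):
        total += np.log(abs(r * (1 - 2 * x)))
        x = r * x * (1 - x)
    return total / n

rs = np.linspace(0.01, 4, 2000)
plt.plot(rs, [lyapunov(r) for r in rs], linewidth=0.5)
plt.xlabel('r')
plt.ylabel('lambda(r)')
plt.show()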

Calculating the Lyapunov Exponent of a Time Series (with python code)

(In a later post I discuss a cleaner way to calculate the Lyapunov exponent for maps and particularly the logistic map, along with Mathematica code.)
I found this method during my Master's while recreating the results of an interesting paper on how some standard tests for chaos fail to distinguish chaos from stochasticity (Stochastic neural network model for spontaneous bursting in hippocampal slices, Biswal and Dasgupta, 2002).
Procedure
The Lyapunov exponent is a measure of sensitive dependence on initial conditions, i.e. how quickly two nearby states diverge.
Now consider two points in the time series, t_i and t_j, whose values are very close. That means the system reached nearly the same state at the ith and jth iterations. Now consider the two sequences t_i, t_{i+1}, t_{i+2}, … and t_j, t_{j+1}, t_{j+2}, … We wish to know how these two sequences diverge from each other. For this, consider the distance between the two sequences after k steps: d(k) = |t_{i+k} − t_{j+k}|. (This is for a 1D time series. For higher dimensions, you can define this to be the Euclidean distance and modify the code accordingly.) If the system is chaotic, d(k) will initially grow exponentially with k. One can therefore plot ln d(k) against k and apply a linear fit; the slope is an estimate of the Lyapunov exponent.
(Since the system is bounded, though, the two nearby trajectories will not diverge indefinitely; the exponential divergence saturates after some number of steps. The straight line must be fitted only within the initial region.)
Now, this was for a single pair of initial states. The Lyapunov exponent is an average of this divergence exponent over all nearby initial pairs. So define ⟨ln d(k)⟩, where ⟨ ⟩ denotes averaging over all starting pairs t_i, t_j such that the initial distance d(0) = |t_i − t_j| is less than some fixed small value. The program finds all such initial pairs, calculates ⟨ln d(k)⟩, plots it against k, and the slope of the initial linear part gives us the Lyapunov exponent.
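In other words, for small k we expect

\left\langle\ln d(k)\right\rangle\approx\left\langle\ln d(0)\right\rangle+\lambda k,

so \lambda can be read off as the slope of the initial straight portion of ⟨ln d(k)⟩ versus k.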
Python Code
The following code reads the time series from a text file, ‘timeseries.txt’, in the same directory. The text file must contain only the time series values in a single column, with no serial numbers or any other text before or after. The program asks for the starting diameter within which to limit the initial pairs. It displays how many such pairs it is finding in the time series, so you can vary the diameter based on this.
It outputs a text file, ‘lyapunov.txt’, with two columns, k and ⟨ln d(k)⟩, which you can then plot and fit in the correct region by visual inspection.
from math import log

def d(series, i, j):
    # distance between the i-th and j-th points of the series
    return abs(series[i] - series[j])

f = open('timeseries.txt', 'r')
series = [float(i) for i in f.read().split()]
f.close()

N = len(series)
eps = float(input('Initial diameter bound: '))
dlist = [[] for i in range(N)]   # dlist[k] collects ln d(k) over all nearby pairs
n = 0                            # number of nearby pairs found
for i in range(N):
    for j in range(i + 1, N):
        if d(series, i, j) < eps:
            n += 1
            print(n)
            for k in range(min(N - i, N - j)):
                dk = d(series, i + k, j + k)
                if dk > 0:       # skip exact coincidences to avoid log(0)
                    dlist[k].append(log(dk))

f = open('lyapunov.txt', 'w')
for i in range(len(dlist)):
    if len(dlist[i]):
        print(i, sum(dlist[i]) / len(dlist[i]), file=f)
f.close()

The following is the plot and fit of the resulting data from a logistic map series with an appropriately chosen initial diameter.

Lyapunov Exponent of Logistic Map

I deliberately did not automate the plotting and fitting part, because a. it’s tedious and hard to write the code in a way that runs on most installations, and b. human eyes will do a much more reliable job of identifying where the linear portion ends.
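That said, if you just want a quick look, a rough sketch along the following lines will plot lyapunov.txt and fit a line over a chosen range (it assumes numpy and matplotlib; the cutoff kmax is an illustrative value that you would still pick by inspecting the curve first):

import numpy as np
import matplotlib.pyplot as plt

k, logd = np.loadtxt('lyapunov.txt', unpack=True)

kmax = 15                 # end of the linear region, chosen by eye after a first look
mask = k <= kmax
slope, intercept = np.polyfit(k[mask], logd[mask], 1)
print('Lyapunov exponent estimate:', slope)

plt.plot(k, logd, '.', label='<ln d(k)>')
plt.plot(k, slope * k + intercept, label='fit, slope = %.3f' % slope)
plt.xlabel('k')
plt.ylabel('<ln d(k)>')
plt.legend()
plt.show()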

R code for multivariate random-walk Metropolis sampling

I couldn’t find a simple R code for random-walk Metropolis sampling (the symmetric-proposal version of Metropolis-Hastings sampling) from a multivariate target distribution in arbitrary dimensions, so I wrote one. This is also my first R code.
It requires the package MASS to sample from the multivariate normal proposal distribution using the mvrnorm function. If you are using R on Windows, you can download the package zip for Windows from the link, and use Packages > Install package(s) from local zip files… from the GUI to install the package.
The reason I couldn’t write the code for a general Metropolis algorithm (i.e. for any arbitrary symmetric proposal distribution, not just normal) or a more general Metropolis-Hastings algorithm (with any arbitrary proposal distribution, symmetric or not) is that generating the proposal point would then require sampling from an arbitrary proposal distribution. This is only easy for a few standard distributions, but hard in general (which is the point of using such algorithms in the first place).
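For reference, with current point x, proposed point x' and target density p, the general Metropolis-Hastings acceptance probability is

\alpha=\min\left(1,\;\frac{p(x')\,q(x\mid x')}{p(x)\,q(x'\mid x)}\right),

and when the proposal q is symmetric, as with the multivariate normal random walk used here, this reduces to \min\left(1,\,p(x')/p(x)\right), which is exactly what the function below uses. Note also that p only needs to be known up to a normalizing constant, since any constant cancels in the ratio.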

I. Function

The following is the function that does the Random Walk Metropolis-Hastings sampling when supplied with the required arguments. Notes about the arguments follow the code.

rwmetro <- function(target,N,x,VCOV,burnin=0)
{
    require(MASS)   #requires package MASS for normal sampling
    samples <- x
    for (i in 2:(burnin+N))
    {
        prop <- mvrnorm(n = 1, x, VCOV)
        if (runif(1) < min(1, target(prop)/target(x)))
            x <- prop
        samples <- rbind(samples,x)
    }
    samples[(burnin+1):(N+burnin),]
}

II. Arguments

  1. target function
    The function defining the multivariate target distribution, written as a function of an n-vector, where n is the number of variables on which the distribution is defined. The different variables of your distribution must be written as x[1], x[2] etc.
    Example
    An example is the following, defining the function f(x,y)=exp(-5*abs(x^2+y^2-1)) in two dimensions:

    ring2D <- function(x)	# x is a vector
    {
    	exp(-5*abs(x[1]^2+x[2]^2-1))
    }
    
  2. N integer
    The final sample size (i.e., excluding the burn-in length).
    Example
    100
  3. x numeric vector
    The starting point (vector) of your Metropolis-Hastings chain. This vector needs to be the same length as the dimension of the target distribution. Your target function must be able to accept this vector as an argument.
    Example
    c(-0.1,20.5)
  4. VCOV numeric matrix
    The variance-covariance matrix for the multivariate normal that is used as the proposal distribution for random-walk Metropolis-Hastings. This must be a square matrix whose dimension matches that of the target distribution, i.e. the length of the vectors accepted by target and the length of x. You can vary the entries of this matrix and observe the results to see what works better for sampling your target distribution.
    Example
    The following line defines a matrix in two dimensions with .01 variance for each variable and no covariance between them.

    vcov2D <- .01*diag(2)
    
  5. burnin (optional) integer
    The ‘burn-in’ length for the chain. The number specified will be the number of initial samples chucked. If nothing is specified, it’s taken to be 0.
    Example
    20

III. Output
numeric matrix
The output is a matrix where each row is a sample from your target distribution, excluding the initial burn-in samples. The number of rows is thus the sample size, and the number of columns is equal to the dimension of the target distribution. You can use this matrix however you want, to save, visualize or analyze your sample.
Example
An example output of sampling a 2D distribution with sample size 5. Each row is an x,y sample.

[1,] 0.12961923 0.03708061
[2,] 0.10765053 -0.02798036
[3,] 0.01112930 -0.07255766
[4,] 0.06049625 -0.04546265
[5,] 0.1832164 -0.1244354

IV. Usage Example
Let’s take the target distribution we used as an example above, f(x,y)=exp(-5*abs(x^2+y^2-1)). This looks like a ring of radius 1 rising from the x-y plane and centered at the origin. Here is a gnuplot surface plot of the distribution (because I found it frustratingly hard to figure out a level plot in R):

Target Distribution in Gnuplot
Let’s generate a sample of size 40,000 from this distribution with the starting point (0,0) and without any burn-in length, and with the variance-covariance matrix we defined before. This is done by calling the function with the correct arguments:

ringsample2D <- rwmetro(ring2D,40000,c(0,0), vcov2D)

This assumes that you have already defined the target ring2D and the matrix vcov2D as explained above, before this function call.
ringsample2D now contains a random sample from the distribution.
With the following one-line R code, I made a scatter plot of the sample from the ringsample2D matrix.

plot(ringsample2D[,1], ringsample2D[,2], xlim=c(-1.5,1.5),ylim=c(-1.5,1.5), main='Metropolis-Hastings Sample',xlab='x', ylab='y', pch='.')

The following is the plot:
Metropolis-Hastings Sample, scatter plot

Putting all the code together in sequence, this is what the full code for defining the arguments, drawing the sample, and making the plot for this example looks like:

# Define arguments
ring2D <- function(x)    # x is a vector
{
    exp(-5*abs(x[1]^2+x[2]^2-1))
}

vcov2D <- .01*diag(2)

#Define the sampling function
rwmetro <- function(target,N,x,VCOV,burnin=0)
{
    require(MASS)   #requires package MASS for normal sampling
    samples <- x
    for (i in 2:(burnin+N))
    {
        prop <- mvrnorm(n = 1, x, VCOV)
        if (runif(1) < min(1, target(prop)/target(x)))
            x <- prop
        samples <- rbind(samples,x)
    }
    samples[(burnin+1):(N+burnin),]
}

# Call the sampling function with the arguments
ringsample2D <- rwmetro(ring2D,40000,c(0,0),vcov2D)

# Use the sample
plot(ringsample2D[,1], ringsample2D[,2], xlim=c(-1.5,1.5),ylim=c(-1.5,1.5), main='Metropolis-Hastings Sample',xlab='x', ylab='y', pch='.')

The following is another sample, this time for a 3-variable function, f(x,y,z)=exp(-5*abs(x^2+y^2+z^2-1)). This is just the 3D version of the previous function, and its density is mostly concentrated on a spherical shell of radius 1 and of some thickness, centered at (0,0,0). Notice how we never have to spare a thought about normalizing these functions to get the sample, which is one of the advantages of the Metropolis-Hastings algorithm.
The target function is now:

ring3D <- function(x)	# x is a vector
{
	exp(-5*abs(x[1]^2+x[2]^2+x[3]^2-1))
}

and the variance-covariance matrix is, say, .01 down the diagonal again:

vcov3D <- .01*diag(3)

Let’s go for a sample of 20000 points after a burn-in of 20000, starting from (0,0,0). The function is then called as:

ringsample3D <- rwmetro(ring3D,20000,c(0,0,0),vcov3D,20000)

ringsample3D is now a 20000×3 matrix (one sample per row, in three dimensions), and I use the following to get a rotatable 3D plot of it (requires the rgl package):

require(rgl)
plot3d(ringsample3D[,1], ringsample3D[,2], ringsample3D[,3], xlab='x',ylab='y', zlab='z',col='red', size=1)

Metropolis Sample

I would have preferred a spinning 3D gif, but I’m too tired to work that out right now.

Javascript Slideshow Code

I prefer writing all my web-design code from scratch, so I use only text editors to write HTML directly, and raw JavaScript (no jQuery etc.). While working on a JavaScript slideshow for Artarium, I ran into a lot of problems in transitioning the images. My JavaScript was changing the src of the image, but I also needed to resize it to the dimensions of the new image before displaying it. Finally, after a lot of time, effort and fruitless trawls through online tutorials and troubleshooters, I came up with the exact configuration that works. On a transition the image disappears, resizes, then reappears. In between, if the next image takes too long to load, you can display a loading animation, as I did: just put an animated gif there permanently with a lower z-index than the photo, and it will show between transitions. For the transitions themselves you can use a CSS3 transition of opacity, as I did, and not have to worry about further code for transition effects.

Here is the JavaScript code with some explanatory comments. I have added a preloading function and keyboard navigation. I have not explained everything in detail, as I have assumed the reader has standard web-design experience, in which case this should suffice.

JavaScript:

function keyNavigate(e) //to enable slideshow navigation using right and left arrow keys
{
    if(e.which==39)
         pre_next();
    else if (e.which==37)
        pre_previous();
}

var i=0,imax=n; //put the # of slideshow images as n here

function imagearray() //prepares things on page load and starts preloading images
{
    preloader=new Image()
    var j=0;
    captions = new Array();
    captions = ['caption1', 'caption2', ... 'caption n'];
    document.photo.src="directory/photo1.jpg"; //assumes photos are in 'directory'
    document.getElementById('navigation-count').innerHTML="0/"+(imax); //sets slide number
    for(j=1; j<=imax; j++)
    {
        filename="directory/photo"+j+".jpg"; //assumes photos are 'photo1.jpg', 'photo2.jpg' etc
        preloader.src=filename;
    }
}

var imgHeight;
var imgWidth;
var newImg;

function resize() //to resize as image changes
{
    imgHeight = this.height;
    imgWidth = this.width;
    if (imgWidth/imgHeight < 2.25) //any desired criterion
    {
        document.photo.style.height='355px'; //or whatever else
    }
    else
    {
        document.photo.style.width="95%";
    }
    document.photo.style.opacity=1; //photo appears only after it has been resized
    document.getElementById('caption').innerHTML=captions[i-1]; //caption changes
    document.getElementById('caption').style.opacity=1; //caption appears
    document.getElementById('navigation-count').innerHTML=(i)+"/"+(imax); //slide count changes
    return true;
}

function pre_next() //to ensure resizing occurs after picture disappears
{
    document.photo.style.opacity=0;
    document.getElementById('caption').style.opacity=0;
    setTimeout("next()",500);
}

function next()
{
    if (i==imax)
    {i=0;}
    i++;
    newImg=new Image();
    newImg.src="directory/photo"+i+".jpg";
    document.photo.src=newImg.src;
    newImg.onload = resize; //resize function is called
}

function pre_previous()
{
    document.photo.style.opacity=0;
    document.getElementById('caption').style.opacity=0;
    setTimeout("previous()",500);
}

function previous()
{
    if (i==1)
    {i=imax+1;}
    i--;
    newImg=new Image();
    newImg.src="directory/photo"+i+".jpg";
    document.photo.src=newImg.src;
    newImg.onload = resize;
}

This JavaScript alone does not suffice. Here are some things you need to do in the HTML for this to work:

  1. The image element that changes in the slideshow should have a name="photo" attribute (it is accessed as document.photo throughout the JavaScript).
  2. Add <body onload="imagearray(), next()" onkeydown="keyNavigate(event)"> to the HTML. The next() is required to display the first photo and initialize the caption and slide count.
  3. The slide transition is performed by the pre_next() and pre_previous() functions. So any event that you want to trigger a transition should call these functions (as in the keyboard navigation part), not next() or previous().
  4. The ‘caption’ div in the HTML holds the caption, while the ‘navigation-count’ div holds the slide number. Place them as you require.

You can find this code at work in any gallery at Artarium, unless I change it in the future. If you hit a block using this code, there’s nothing a couple of Google searches won’t clear for you. If there’s a problem that persists even after you have done your research thoroughly, leave a comment (be specific) with your e-mail and I promise to try to help.