(In a later post I discuss a cleaner way to calculate the Lyapunov exponent for maps and particularly the logistic map, along with Mathematica code.)

I found this method during my Masters while recreating the results of an interesting paper on how some standard tests for chaos fail to distinguish chaos from stochasticity (*Stochastic neural network model for spontaneous bursting in hippocampal slices*, Biswal and Dasgupta, 2002).

**Procedure**

The Lyapunov exponent is a measure of sensitive dependence on initial conditions, i.e. how quickly two nearby states diverge.

Now consider two points in the time series, *t*_{i} and *t*_{j}, whose values are very close. That means the system reached nearly the same state at the *i*-th and *j*-th iterations. Now consider the two sequences *t*_{i}, *t*_{i+1}, *t*_{i+2}, … and *t*_{j}, *t*_{j+1}, *t*_{j+2}, … We wish to know how these two sequences diverge from each other. For this, consider the distance between the two sequences after *k* steps: *d*(*k*) = |*t*_{i+k} – *t*_{j+k}|. (This is for a 1D time series. For higher dimensions, you can define this to be the Euclidean distance and modify the code accordingly.) If the system is chaotic, *d*(*k*) will initially rise exponentially with *k*. One can therefore plot ln *d*(*k*) vs *k* and apply a linear fit; the slope is an estimate of the Lyapunov exponent.

(Since the system is bounded, though, the two nearby states will not diverge indefinitely. Their exponential divergence stops after some length, so the straight line must be fitted only within this region.)
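To make the idea concrete, here is a minimal sketch for a *single* nearby pair, using the logistic map *x* → 4*x*(1 − *x*) as the example system (the map, the threshold `eps`, and all variable names here are my own choices, not part of the post's program):

```python
from math import log

# Generate an example chaotic series with the logistic map at r = 4.
r = 4.0
x = 0.3
series = []
for _ in range(2000):
    x = r * x * (1 - x)
    series.append(x)

# Find one pair of close returns t_i, t_j (skipping immediate
# neighbours in time, which are close trivially).
eps = 1e-3
pair = None
for i in range(len(series) - 10):
    for j in range(i + 100, len(series) - 10):
        if 0 < abs(series[i] - series[j]) < eps:
            pair = (i, j)
            break
    if pair:
        break

# ln d(k) initially grows roughly linearly in k, with slope near the
# Lyapunov exponent (ln 2 for this map), before saturating.
i, j = pair
for k in range(10):
    print(k, log(abs(series[i + k] - series[j + k])))
```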

Now, this was for a single pair of initial states. The Lyapunov exponent is an average of this divergence exponent over all nearby initial pairs. For this, define <ln *d*(*k*)>, where <…> denotes averaging over all starting pairs *t*_{i}, *t*_{j} such that the initial distance *d*(0) = |*t*_{i} – *t*_{j}| is less than some fixed small value. The program finds all such initial pairs, calculates <ln *d*(*k*)>, plots it against *k*, and the slope of the initial linear part gives us the Lyapunov exponent.

**Python Code**

The following code takes a text file with the time series, ‘timeseries.txt’, as the argument. The text file must contain only the time series values in a single column, no serial numbers or any other text before or after. It asks for the starting diameter within which to limit the initial pairs. It displays how many such pairs it is finding in the time series, so you can vary the diameter based on this.

It outputs a text file, ‘lyapunov.txt’, with two columns, *k* and <ln *d*(*k*)>, which you can then plot and fit in the correct region by visual inspection.

```python
from math import log

def d(series, i, j):
    """Distance between the i-th and j-th values of the series."""
    return abs(series[i] - series[j])

with open('timeseries.txt', 'r') as f:
    series = [float(x) for x in f.read().split()]

N = len(series)
eps = float(input('Initial diameter bound: '))

dlist = [[] for i in range(N)]
n = 0  # number of nearby pairs found
for i in range(N):
    for j in range(i + 1, N):
        if d(series, i, j) < eps:
            n += 1
            print(n)  # progress: pairs found so far
            # Track the divergence of this pair for as long as
            # both trajectories stay within the series.
            for k in range(min(N - i, N - j)):
                dlist[k].append(log(d(series, i + k, j + k)))

with open('lyapunov.txt', 'w') as f:
    for i in range(len(dlist)):
        if len(dlist[i]):
            f.write('%d %f\n' % (i, sum(dlist[i]) / len(dlist[i])))
```

The following is the plot and fit of the resulting data from a logistic map series with an appropriately chosen initial diameter.
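If you want to generate such a test series yourself, here is a minimal sketch (my own, not part of the original post) that writes a ‘timeseries.txt’ in the format the script expects, using the logistic map at *r* = 4, whose exact Lyapunov exponent ln 2 ≈ 0.693 gives a handy check on the fitted slope:

```python
# Generate a logistic-map time series for testing the script above.
# Assumes the fully chaotic parameter r = 4, for which the exact
# Lyapunov exponent is ln 2.
r = 4.0
x = 0.1  # arbitrary initial condition in (0, 1)
series = []
for _ in range(5000):
    x = r * x * (1 - x)
    series.append(x)

# One value per line, no serial numbers, as required by the script.
with open('timeseries.txt', 'w') as f:
    for value in series:
        f.write('%f\n' % value)
```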

I deliberately did not automate the plotting and fitting part, because a. it’s tedious and hard to write the code in a way that runs on most installations, and b. human eyes will do a much more reliable job of identifying where the linear portion ends.

Can you please provide the required program to calculate the maximum Lyapunov exponent for a delay differential equation?


Sorry Santanu for the very late reply. These approaches were for a map. For a flow, i.e. a differential equation, I haven’t read up on how the maximum Lyapunov exponent is calculated.


Hi, the idea that stochasticity and chaos can’t be distinguished intrigues me. The paper you referred to didn’t enlighten me much on first glance. Anyway, I think that, loosely speaking, the difference is that the former is completely random (like numbers drawn from a probability distribution) while the latter is deterministic, as it has an underlying equation which can be used to determine the next number. The similarity between them is that even in chaos, looking at the numbers, you can’t tell which number comes next; these numbers don’t have any linear relationship among them.

Am I right?


Mohit, first of all, sorry for the late reply. It’s been on my mind for a while to reply to this, but I just haven’t got around to doing it.

Coming to your question: it is a deep and important one, and in my opinion this paper is very important for exactly that reason. What it basically reports is that dynamics previously classified as ‘chaotic’ by certain criteria can be reproduced by a stochastic model, implying that we need to refine our criteria for deciding what is chaotic as opposed to stochastic behaviour.

Chaotic behaviour is mainly characterized by two things:

1. It has to be deterministic

2. It has to be highly sensitive to initial conditions

Stochastic behaviour has neither of these properties. It isn’t deterministic of course, and it cannot be called sensitive to initial conditions because there is no dependence on initial conditions at all.

The reason they often look the same at first glance is, in my opinion, because chaotic behaviour is often bounded as well, and the nonlinear rule throws the phase point around in the bounded phase space in such a way that it’s hard to find a pattern. However, one can use techniques like the cool return map technique to find underlying determinism.

Looking at the time series of a logistic map, for example, no pattern is apparent. But if you plot a simple first return map from the time series, the underlying quadratic rule immediately shows itself. It’s a pretty cool little thing you can try out.
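That return-map trick can be sketched in a few lines (the parameter *r* = 4 and the initial condition here are my own choices): each point (*t*_{n}, *t*_{n+1}) of the return map lies exactly on the parabola defined by the map, even though the raw series looks patternless.

```python
# Sketch of the first return map idea: plot t_{n+1} against t_n and
# the underlying quadratic rule t_{n+1} = r * t_n * (1 - t_n) appears.
r = 4.0
x = 0.37
series = []
for _ in range(1000):
    x = r * x * (1 - x)
    series.append(x)

# Every consecutive pair lies on the parabola (here we just verify it;
# scatter-plotting the pairs shows the curve visually).
for tn, tn1 in zip(series, series[1:]):
    assert abs(tn1 - r * tn * (1 - tn)) < 1e-12
print('All return-map points lie on t_{n+1} = 4 t_n (1 - t_n)')
```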



Have you got code for R? Thanks.


Hi. Can you share the .txt file? Thanks.
