Learning Notes

# Definition of Problem

To model the probability distribution P(X = n, t) of the noise caused by the random arrival of electrons in a vacuum tube.

# Annotation

λ = Rate constant for electron arrivals per unit time; the intensity of the Poisson distribution

n = Total number of electrons that have arrived by time t

G = Generating function

# Derivation

Let P(n, t) be the probability that a total of n electrons have arrived by time t. The probability of a single electron arriving over a short period Δt can be modeled with a constant λ:

$\displaystyle P(\text{one arrival in } [t, t + \Delta t]) = \lambda \Delta t$

Then the probability of having n arrivals at t + Δt is the sum of the probabilities of two cases: 1) already n arrivals, and no arrival in Δt; 2) already n − 1 arrivals, and one arrival in Δt. Formulated as follows:

$\displaystyle P(n, t+ \Delta t) = P(n, t)(1 - \lambda \Delta t) + P(n-1, t)(\lambda \Delta t)$

Rearranging the above equation and letting Δt → 0, we get [Eq.1]:

$\displaystyle \frac{\partial P(n, t)}{\partial t} = \lim_{\Delta t \to 0}\frac{P(n, t+\Delta t) - P(n, t)}{\Delta t} = \lambda \left[P(n- 1, t) - P(n, t)\right]$

At this point, we introduce a mathematical tool, the generating function, and denote:

$\displaystyle G(s, t) = \sum_{n=0}^{\infty} s^n P(n, t)$

so that:

$\displaystyle \frac{\partial G(s, t)}{\partial t} = \sum_{n=0}^{\infty} s^n \frac{\partial P(n, t)}{\partial t}$

$\displaystyle = \lambda \sum_{n=0}^{\infty} s^n \left[P(n-1, t) - P(n, t)\right]$

$\displaystyle = \lambda\left[s \sum_{n=-1}^{\infty} s^n P(n, t)-\sum_{n=0}^{\infty} s^n P(n, t)\right]$

$\displaystyle = \lambda\left[s \sum_{n=0}^{\infty} s^n P(n, t)-\sum_{n=0}^{\infty} s^n P(n, t)\right]$
[P(-1, t) = 0 for all t, since the number of arrivals cannot be negative]

$\displaystyle = \lambda(s - 1) \sum_{n=0}^{\infty} s^n P(n, t)$

$= \lambda(s-1) \cdot G(s, t)$

We then get ourselves a p.d.e:

$\displaystyle \frac{\partial G}{\partial t} = \lambda (s-1) G$

separating variables,

$\displaystyle \frac{dG}{G} = \lambda (s-1)\, dt$

$\displaystyle \ln{[G(s, t)]} = \lambda t(s-1) + C$

With the initial condition G(s, 0) = 1, because P(0, 0) = 1 and P(n, 0) = 0 for all n > 0, the particular solution is:

$\displaystyle G(s, t) = G(s, 0)\, e^{\lambda t(s-1)} = e^{\lambda t(s-1)}$

Perform Taylor expansion at s = 0:

$\displaystyle G(s, t) = G(0, t) + s\cdot\frac{G'(0, t)}{1!} + s^2 \cdot \frac{G''(0, t)}{2!} + \cdots$

$\displaystyle \sum_{n=0}^{\infty} s^n P(n, t) = e^{-\lambda t} + \frac{s}{1!}(\lambda t)e^{-\lambda t} + \frac{s^2}{2!} (\lambda t)^2 e^{-\lambda t} + \cdots$

By comparing terms in powers of s, we can deduce:

$\displaystyle P(n, t) = \frac{(\lambda t )^n e^{-\lambda t}}{n!}$

which is exactly the Poisson distribution with intensity λt; viewed as a function of t, this is a Poisson process.
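As a quick numerical check (function names here are illustrative), a short script can confirm that the derived P(n, t) really does have $e^{\lambda t(s-1)}$ as its generating function, and that it is normalized:

```python
import math

def poisson_pmf(n, lam, t):
    """P(n, t) = (lam*t)^n * exp(-lam*t) / n!"""
    return (lam * t) ** n * math.exp(-lam * t) / math.factorial(n)

def gen_func(s, lam, t):
    """The derived generating function G(s, t) = exp(lam*t*(s - 1))."""
    return math.exp(lam * t * (s - 1))

lam, t, s = 2.0, 1.5, 0.7
# Truncated power series sum_n s^n P(n, t); the tail beyond n = 80 is negligible
series = sum(s ** n * poisson_pmf(n, lam, t) for n in range(80))
```

The truncated series agrees with $G(s, t)$ to machine precision, and setting s = 1 recovers the normalization $\sum_n P(n, t) = 1$.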



# Taylor Series

$\displaystyle f(x) = f(a) + \frac{f'(a)}{1!}(x - a) + \frac{f''(a)}{2!} (x-a)^2 + \cdots$

$\displaystyle f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}(x-a)^n$
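As a small sanity check, truncating the series for f = exp (all of whose derivatives at a equal $e^a$) approximates the function well; `taylor_exp` is a hypothetical helper written for this note:

```python
import math

def taylor_exp(x, a=0.0, terms=20):
    # For f = exp, every derivative f^(n)(a) equals e^a, so the partial sum is
    # sum_{n < terms} e^a * (x - a)^n / n!
    return sum(math.exp(a) * (x - a) ** n / math.factorial(n)
               for n in range(terms))
```

With 20 terms the truncation error at x − a = 1 is of order 1/20!, far below floating-point precision.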

# Bra-kets Notation

$\displaystyle \langle m | n \rangle = \delta_{m,n}$

$\displaystyle \sum_n | n \rangle \langle n| = I$

# Probabilities

## Bayes’ theorem

$\displaystyle P(A|B) = \frac{P(B|A)P(A)}{P(B)}$
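A quick worked example with hypothetical numbers (a rare condition A and a noisy test B) shows how the theorem combines with the law of total probability:

```python
# Illustrative numbers, not from any real data set:
p_A = 0.01            # prior P(A)
p_B_given_A = 0.95    # likelihood P(B|A)
p_B_given_not_A = 0.05

# P(B) by the law of total probability
p_B = p_B_given_A * p_A + p_B_given_not_A * (1 - p_A)
# Bayes' theorem
p_A_given_B = p_B_given_A * p_A / p_B
```

Despite the accurate-looking test, the posterior is only about 0.16 because the prior is so small.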

## Mean/Expectation value

For x being an outcome mapped by the random variable X
$\displaystyle \left< X \right> = \sum_{x \in \Omega} P(x)\cdot X(x)$

For any function f that takes x as an input
$\displaystyle \left< f[X(x)] \right> = \int_{\Omega} f[X(x)] P(x) dx$

For the sample mean over N observations
$\displaystyle \left< X_N \right> = \frac{1}{N} \sum_{n=1}^N X(n)$

## Correlations

$\displaystyle \left< X_1 X_2 \right> = \int_\Omega X_1(x_1)X_2(x_2)p_1(x_1)p_2(x_2)\, dx_1 dx_2$

## Covariance

Variance, the square of the S.D.
$\displaystyle \text{var}[X] = \left<[X-\left<X\right>]^2\right>$

Covariance Matrix
$\displaystyle \langle X_i, X_j \rangle =\langle (X_i - \langle X_i\rangle )(X_j -\langle X_j \rangle ) \rangle = \langle X_i X_j \rangle - \langle X_i \rangle \langle X_j\rangle$
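The identity $\langle X_i X_j \rangle - \langle X_i\rangle\langle X_j\rangle$ can be checked against NumPy's own estimator; the data below are synthetic and the variable names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)
y = 0.5 * x + rng.normal(size=100_000)  # correlated with x by construction

# Covariance via the identity <XY> - <X><Y>
cov_manual = np.mean(x * y) - np.mean(x) * np.mean(y)
# NumPy's estimator; bias=True uses the 1/N normalization matching the identity
cov_np = np.cov(x, y, bias=True)[0, 1]
```

Both estimates agree with each other exactly, and both are close to the population value cov(x, y) = 0.5.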

## Moments

The n-th moment of f(x) about a point c
$\displaystyle \mu_f (n) = \int_{-\infty}^{\infty} (x-c)^n \cdot f(x) dx$

The n-th moment of random variable X, simply replace f(x) with p(x)
$\displaystyle \mu_X (n) = \int_{\Omega} X(x)^n \cdot p(x) dx$

The n-th central moment of a random variable X; the 2nd central moment is the variance
$\displaystyle \overline{\mu}_X (n) = \left<[X-\left<X\right>]^n\right>$

# Domain Transforms

## Fourier Transform

Continuous (unitary convention, angular frequency $\omega = 2\pi f$)
$\displaystyle X(\omega) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} x(t)e^{-i \omega t}\, dt$

$\displaystyle x(t) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} X(\omega)e^{i \omega t}\, d\omega$

Discrete (unitary DFT over N samples)
$\displaystyle F(k) = \frac{1}{\sqrt{N}} \sum_{n=0}^{N-1} f(n)e^{-i 2\pi kn/N}$

$\displaystyle f(n) = \frac{1}{\sqrt{N}} \sum_{k=0}^{N-1} F(k)e^{i 2\pi kn/N}$
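A minimal sketch of the unitary DFT pair (kernel $e^{\mp i 2\pi kn/N}$, symmetric $1/\sqrt{N}$ normalization), checked against NumPy's FFT:

```python
import numpy as np

def dft(f):
    """Unitary DFT: F[k] = (1/sqrt(N)) * sum_n f[n] * exp(-2j*pi*k*n/N)."""
    N = len(f)
    n = np.arange(N)
    W = np.exp(-2j * np.pi * np.outer(n, n) / N)  # DFT matrix W[k, n]
    return W @ f / np.sqrt(N)

def idft(F):
    """Inverse: conjugate kernel with the same 1/sqrt(N) factor."""
    N = len(F)
    n = np.arange(N)
    W = np.exp(2j * np.pi * np.outer(n, n) / N)
    return W @ F / np.sqrt(N)

f = np.random.default_rng(1).normal(size=8)
round_trip = idft(dft(f))  # recovers f up to floating-point error
```

The symmetric normalization corresponds to `norm="ortho"` in `np.fft.fft`.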

## Laplace Transform

(The Z-transform plays the analogous role for discrete-time signals.)

First define
$\displaystyle s = \sigma + i\omega$

Continuous
$\displaystyle F(s) = \int_0^{\infty} e^{-st} f(t) dt$

$\displaystyle f(t) = \frac{1}{2\pi i} \lim_{T\rightarrow \infty}\int_{\gamma - iT}^{\gamma + iT} e^{st} F(s) ds$
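The forward transform can be sanity-checked numerically; here $f(t) = e^{-at}$, whose transform $1/(s+a)$ is standard, and the infinite upper limit is truncated:

```python
import numpy as np

# Check F(s) = 1/(s + a) for f(t) = exp(-a*t) by truncating the integral.
a, s = 2.0, 3.0
t = np.linspace(0.0, 40.0, 400_001)   # finite upper limit; tail is ~e^{-200}
integrand = np.exp(-s * t) * np.exp(-a * t)
# Trapezoidal rule for the integral from 0 to 40
F_numeric = np.sum((integrand[1:] + integrand[:-1]) * np.diff(t)) / 2.0
F_exact = 1.0 / (s + a)
```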


# Definition of Problem

Consider 1D Brownian motion of a system with an infinite number of particles in an equilibrium state. Derive the profile of particle density $f(x, t)$ where x is the position and t is time.

# Annotation

k = Boltzmann’s constant

T = Temperature

η = Viscosity constant of fluid

a = Radius of the particles considered (assumed spherical, as in Stokes' drag law)

X = Random variable denoting a fluctuating force

v = First time derivative of x , namely velocity

# Derivation

Consider two forces acting on the particles:

1) A varying force X.

2) A drag force caused by viscosity dependent on particle velocity v:

$\displaystyle \Phi(v) = -6\pi\eta a v$

Through statistical mechanics, it is well established that the mean K.E. in equilibrium is proportional to the temperature of the system [Eq.1].

$\displaystyle \left< \frac{1}{2} mv^2 \right> = \frac{1}{2} kT$

Then we can write Newton's second law of motion as follows:

$\displaystyle m\frac{d^2 x}{dt^2} = -6\pi\eta a \frac{d x}{dt} + X$

Inspired by [Eq.1], multiply both sides by x and get [Eq.2]:

$\displaystyle m\frac{d^2 x}{dt^2} x = -6\pi\eta a \frac{d x}{dt}x + Xx$

consider the following equations:

$\displaystyle x \frac{d^2 x}{dt^2} =\frac{d}{dt}\left( x\frac{dx}{dt}\right) - v^2 = \frac{1}{2}\frac{d^2 x^2}{dt^2} - v^2$

$\displaystyle x\frac{dx}{dt} = \frac{1}{2}\frac{d x^2}{dt}$

and taking the time average of both sides of [Eq.2], noting that the fluctuating-force term averages to zero, we have:

$\displaystyle \frac{m}{2}\frac{d^2 \left<x^2\right>}{dt^2} - m\left<v^2\right> = -3\pi\eta a \frac{d \left<x^2\right>}{dt} + \left<Xx\right>$

$\displaystyle \frac{m}{2}\frac{d^2}{dt^2}\left<x^2\right> + 3\pi\eta a \frac{d}{dt}\left<x^2\right> = kT$

The original equation of motion is a stochastic differential equation (the Langevin equation); after averaging we are left with an ordinary differential equation for $\left<x^2\right>$. Its general solution is:

$\displaystyle \frac{d \left<x^2\right>}{dt} = C\exp{(-6\pi \eta a t/m)} + \frac{kT}{3\pi\eta a}$

Langevin observed that the exponential term decays rapidly, with a time constant of order $10^{-8}\,\mathrm{s}$, so it is insignificant when considering time averages. To recover Einstein's result, integrate once more:

$\displaystyle \left<x^2\right>_t - \left<x^2\right>_0 = \frac{kT}{3\pi\eta a}t$
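This linear growth of the mean-square displacement can be checked with a small simulation; a minimal sketch assuming the overdamped limit as a Gaussian random walk, with a hypothetical diffusion coefficient D such that $kT/(3\pi\eta a) = 2D$:

```python
import numpy as np

rng = np.random.default_rng(42)
D = 0.5                                  # illustrative diffusion coefficient
dt, n_steps, n_particles = 0.01, 500, 5_000
# Independent Gaussian increments with variance 2*D*dt per step
steps = rng.normal(0.0, np.sqrt(2 * D * dt), size=(n_particles, n_steps))
x = np.cumsum(steps, axis=1)             # particle trajectories
msd = np.mean(x ** 2, axis=0)            # <x^2> at each time step
t = dt * np.arange(1, n_steps + 1)
slope = msd[-1] / t[-1]                  # should be close to 2*D
```

The fitted slope recovers $2D$, i.e. the coefficient $kT/(3\pi\eta a)$ in the result above.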

# Reference

Gardiner, Crispin W. Stochastic methods. Springer-Verlag, Berlin–Heidelberg–New York–Tokyo, 1985.


# Markov Property

To recap, the Markov property states that, given the present state, past events have no influence on future events. Formulated as [Eq.1]:

$\displaystyle P(\{X_n=i_n\}|\{X_{n-1}=i_{n-1}, X_{n-2}=i_{n-2}, \dots, X_0=i_0 \} )$

$\displaystyle \quad = P(\{X_n=i_n\}|\{X_{n-1}=i_{n-1}\})$

# Transition Probability Matrix

## Transition Property of Markov Chain

Denoting the event $\{X_n = i_n\}$ simply by $X_n$,

$\displaystyle P(X_{n+1}, X_n, \dots, X_1)$

$\displaystyle = P(X_{n+1}|X_n, \dots, X_1) \cdot P(X_n, \dots, X_1)$

$\displaystyle = P(X_{n+1}|X_n) \cdot P(X_n, \dots, X_1)$
(by the Markov property [Eq.1])

$\displaystyle = P(X_{n+1}|X_n) \cdot P(X_n|X_{n-1}) \cdots P(X_1)$

$\displaystyle = P_{n, n + 1} \cdot P_{n - 1, n}\cdots P_{1, 2}\cdot P_1$
(Denote $P(j|i)$ as $P_{i,j}$ )

## Transition Probability written as Matrix

The above describes the transition property of a Markov chain. If we write the probability of going to the j-th state from the i-th state as the (i, j) component of a matrix, we call this a Transition Probability Matrix (TPM), or more commonly a stochastic matrix. In general, it can be written as follows [Eq.2]:

$\displaystyle P =\begin{pmatrix} P_{1,1}&P_{1,2}&\dots &P_{1,j}&\dots &P_{1,n+1}\\P_{2,1}&P_{2,2}&\dots &P_{2,j}&\dots &P_{2,n+1}\\\vdots &\vdots &\ddots &\vdots &\ddots &\vdots \\P_{i,1}&P_{i,2}&\dots &P_{i,j}&\dots &P_{i,n+1}\\\vdots &\vdots &\ddots &\vdots &\ddots &\vdots \\P_{n+1,1}&P_{n+1,2}&\dots &P_{n+1,j}&\dots &P_{n+1,n+1}\\\end{pmatrix}$

which also carries the transition property; for instance, the probability of going from the i-th state to the j-th state in exactly n steps is the (i, j) entry of $P^n$.

### Example:

Suppose we want to know the probabilities of the system ending in each state, given that it started from the i-th state and went through n steps. First, construct a unit state vector whose i-th element is 1:

$\displaystyle \pi_i^{(0)} = \{0, 0, \dots, 1, \dots, 0\}$

Then the probabilities we want can be calculated by the multiplication:

$\displaystyle \pi^{(n)} = \pi_i^{(0)} \cdot P^n$
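A minimal sketch of this calculation with a hypothetical 3-state stochastic matrix; `np.linalg.matrix_power` computes $P^n$ and the result stays a probability vector:

```python
import numpy as np

# A hypothetical 3-state stochastic matrix (each row sums to 1)
P = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.6, 0.2],
              [0.0, 0.3, 0.7]])

pi_0 = np.array([1.0, 0.0, 0.0])              # start in state 1
pi_n = pi_0 @ np.linalg.matrix_power(P, 500)  # distribution after 500 steps
```

For this chain (irreducible and aperiodic), the result after many steps also approximates the stationary distribution, which is unchanged by a further multiplication by P.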

(To be continued)

# Prerequisite

You would need the following to make this work:

1. Matplotlib
2. ImageMagick

ImageMagick 7.0 replaced the `convert` binary with `magick`. If you encounter errors when using `convert`, try `magick` instead (applies to both Windows and Linux; untested on macOS).

# Syntax

```python
import numpy as np
import matplotlib as mpl
mpl.use('Agg')  # select the non-interactive backend before importing pyplot
import matplotlib.pyplot as plt
import matplotlib.animation as animation

mpl.rcParams['animation.ffmpeg_path'] = "C:/Path/To/Image/Magick/ffmpeg.exe"
# For ImageMagick 7.0, convert.exe is replaced by magick.exe
mpl.rcParams['animation.convert_path'] = "C:/Path/To/Image/Magick/magick.exe"

# Save the animation (fig, animateFunction and N are defined elsewhere)
ani = animation.FuncAnimation(fig, animateFunction, frames=N,
                              interval=2, blit=True, repeat=False)
ani.save("./output.gif", writer='imagemagick', fps=60, bitrate=-1)
plt.show()  # no-op under the non-interactive Agg backend
```


# Poisson Distribution

The Poisson distribution in fact originates from the binomial distribution, which expresses the probabilities of counting events over a certain period of time. When the interval of each trial becomes infinitely small (with the expected count held fixed), the binomial distribution reduces to the Poisson distribution.

The Poisson Distribution can be formulated as follow:

$\displaystyle P(X=k)=\frac{\lambda ^k e^{-\lambda}}{k!}, \quad k = 0, 1, 2, \dots$

where X is a random variable.

Figure 1. Poisson Distribution

# Poisson Process

## Definition

For a random process $X(t)$, it is identified as a Poisson process if it satisfies the following conditions:

1. $X(0) = 0$
2. Increments over disjoint intervals are independent (in particular, it is a Markov process)
3. $P(X(t)=k) = [(\lambda t)^k e^{-\lambda t}]/k!$

One can think of it as an evolving Poisson distribution whose intensity scales with time (λ becomes λt), as illustrated in later parts (Figure 3).

Note: if λ stays constant for all t, then the process is identified as a homogeneous Poisson process, which has stationary increments.

## Example – Simulation of Poisson processes

Similar to the case of the random walk, the Poisson process $\{X(t) \in \mathbb{Z}^+; t \in T\}$ can be formulated as follows [Eq.1]:

$\displaystyle X(t) = X_0 + \sum_{i=1}^{t} X_i$

where by definition we require $X_0$ to be zero, and each increment $X_i$ is an independent Poisson(λ) count of the arrivals in the i-th unit time step.

```python
import numpy as np
import matplotlib.pyplot as plt

# Prepare data
N = 50  # number of unit time steps
lambdas = [1, 2, 5]
X_T = [np.random.poisson(lam, size=N) for lam in lambdas]
S = [np.cumsum(X) for X in X_T]  # running totals X(t)
X = np.linspace(0, N, N)

# Plot the graph
graphs = [plt.step(X, S[i], label="Lambda = %d" % lambdas[i])[0]
          for i in range(len(lambdas))]
plt.legend(handles=graphs, loc=2)
plt.title("Poisson Process",
          fontdict={'fontname': 'Times New Roman', 'fontsize': 21}, y=1.03)
plt.ylim(0)
plt.xlim(0)
plt.show()
```


Figure 2. Poisson Process, k against t

## Further Elaboration of results

To show that the above process follows condition 3 of the definition, which states [Eq.2]:

$\displaystyle P(X(t)=k) = \frac{(\lambda t)^k e^{-\lambda t}}{k!}$

the graph of P( X(t) = k ) against t is plotted for different values of λ.

Observe that the peak for each k occurs at the time t where k equals the expected value of the Poisson process at that same t in Figure 2; it can also be interpreted as the most probable k at time t. An annotated comparison is provided below:
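The peak locations can be verified directly: for fixed k, setting $dP/dt = 0$ in [Eq.2] gives $t = k/\lambda$, i.e. the time at which the mean count $\lambda t$ equals k. A small numerical check (values of λ and k chosen arbitrarily):

```python
import numpy as np
from math import factorial

lam, k = 2.0, 6
t = np.linspace(1e-6, 10.0, 100_001)
# P(X(t) = k) as a function of t for fixed k
p = (lam * t) ** k * np.exp(-lam * t) / factorial(k)
t_peak = t[np.argmax(p)]   # numerically close to k / lam = 3.0
```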

# Evolution of Poisson process

The following animation shows how the probability of a process X(t) = k evolves with time. One can observe two main features:

1. The probability distribution spreads wider as time passes.
2. The peak of the probability distribution shifts as time passes, corresponding to the simulation in Figure 2.

Both features are governed by condition 3 [Eq.2].

Figure 3. Evolution of Poisson Process


# Definition of Problem

Consider 1D Brownian motion of a system with n particles. Formulate the profile of particle density $f(x, t)$ where x is the position and t is time.

# Annotation

n = Total number of particles in system

τ = Infinitesimal time interval between each change of profile f(x, t)

D = $\{D_1, D_2, \dots, D_n\}$ = A set of random variables denoting the change in the x-coordinate of each particle

$\phi (\Delta)$ = Probability density function of each random variable $D_i$; note that this is an even function

# Derivation

Let v = f(x, t) be the particle density. The change of the profile from f(x, t) to f(x, t + τ) is obtained by taking into account the displacements $D_i = \Delta$ of all particles. We therefore have [Eq.1]:

$\displaystyle f(x, t+\tau) = \int_{-\infty}^{\infty} f(x + \Delta, t) \cdot \phi(\Delta)d\Delta$

And because τ is infinitesimal, we can write the derivative [Eq.2]:

$\displaystyle f(x, t+\tau) = f(x, t) + \tau \cdot \frac{\partial f}{\partial t}$

In the next step, expand f(x + Δ, t) in powers of Δ about the point x [Eq.3]:

$\displaystyle f(x + \Delta, t) = f(x, t) + \Delta\cdot \frac{\partial f}{\partial x} + \frac{\Delta^2}{2!}\cdot\frac{\partial^2 f}{\partial x^2} + \cdots$

Combining the three equations and dropping the insignificant higher-order terms of [Eq.3], we obtain [Eq.4]:

$\displaystyle f + \tau\frac{\partial f}{\partial t} = f \int_{-\infty}^{\infty}\phi (\Delta)\,d\Delta +\frac{\partial f}{\partial x} \int_{-\infty}^{\infty}\Delta\cdot\phi (\Delta)\,d\Delta + \frac{\partial^2 f}{\partial x^2}\int_{-\infty}^{\infty}\frac{\Delta^2}{2!}\cdot\phi (\Delta)\,d\Delta$

Because φ(Δ) is even, the odd-order terms of [Eq.4] vanish, and the first integral equals 1 by the normalization constraint of probability. We are left with the partial differential equation [Eq.5]:

$\displaystyle \frac{\partial f}{\partial t}= \zeta \cdot \frac{\partial^2 f}{\partial x^2}$

where:

$\displaystyle \zeta = \frac{1}{\tau} \int_{-\infty}^{\infty}\frac{\Delta^2}{2!}\cdot\phi (\Delta)d\Delta$

Solving this equation [Eq.5] with the initial condition that all n particles diffuse from a single point (x = 0), together with the constraint of particle conservation, which are expressed as:

$\displaystyle f(x, t)=0, \{\forall x \neq 0 \text{ and } t=0\}$ and $\displaystyle \int_{-\infty}^{\infty} f(x, t) dx = n$

gives the solution [Eq.6]:

$\displaystyle f(x, t) = \frac{n}{\sqrt{4\pi\zeta t}}\exp\left(-\frac{x^2}{4\zeta t}\right)$
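The normalized form of this solution (with the prefactor $n/\sqrt{4\pi\zeta t}$ required by particle conservation) can be checked numerically; the parameter values below are arbitrary illustrative choices:

```python
import numpy as np

n, zeta, t = 1000.0, 0.3, 2.0
x = np.linspace(-50.0, 50.0, 200_001)
f = n / np.sqrt(4 * np.pi * zeta * t) * np.exp(-x ** 2 / (4 * zeta * t))

dx = x[1] - x[0]
total = np.sum(f) * dx                  # should equal n (particle conservation)
msd = np.sum(x ** 2 * f) * dx / total   # mean-square displacement, 2*zeta*t
```

The profile is a Gaussian whose variance $2\zeta t$ grows linearly in time, consistent with the diffusive spreading described by [Eq.5].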

# Side Notes

[Eq.5] is actually the famous diffusion equation for continuous time and continuous space, a special case of the Fokker-Planck equation (also known as Kolmogorov's forward equation).

We can also read off from [Eq.6] that the root-mean-square displacement of the particles is $\sqrt{2\zeta t}$.

# Reference

Gardiner, Crispin W. Stochastic methods. Springer-Verlag, Berlin–Heidelberg–New York–Tokyo, 1985.

Einstein, Albert. Investigations on the Theory of the Brownian Movement. Courier Corporation, 1956.