## Hi, I am migrating!

Because of the annoying fact that **LaTeX support is super weak** on official WordPress, I am moving to community WordPress.

Stay tuned to: My new site


# Stochastic – Particle Filtering & Markov Chain Monte Carlo (MCMC) with Python example

# Definition

## Particle

## MCMC

## Particle Filtering

# MCMC Methods

## Gibbs Sampling

## Metropolis-Hastings

# See Also

# Reference

# Stochastic – Common Distributions

# Introduction

# Distributions

## Gaussian Distribution

## Poisson Distribution

## Bernoulli

## Geometric

## Exponential

## Gamma

## Rayleigh

# Stochastic – Wiener Process

# Annotation

# Definition

# Solving the PDE

# Interpretation of result

# Reference

# See also

# Python – Reminder on configuring Jupyter Qtconsole

# Tutorial

## Setting external editor

# Python – Scraping Javascript Driven Web


# Stochastic – Differential Chapman-Kolmogorov Equation

# Introduction

# Chapman-Kolmogorov Equation

# Differential Chapman-Kolmogorov Equation

## Assumptions

## Derivation

# Interpreting the equation

## Discontinuous jumps

## Fokker-Planck Equation

# Reference


A particle can be seen as a single realization (a joint sample) of all random variables in a joint distribution.

Examples:

MCMC refers to methods that randomly sample particles from a joint distribution using a Markov chain.
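As a minimal illustration of such a method, here is a sketch of random-walk Metropolis-Hastings (covered later in this post) targeting a standard normal density. The target, proposal width and iteration count below are illustrative assumptions, not from the original post:

```python
import numpy as np

def target(x):
    # Unnormalized target density: standard normal up to a constant
    return np.exp(-0.5 * x ** 2)

def metropolis_hastings(n_samples=50000, proposal_sd=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x = 0.0
    samples = []
    for _ in range(n_samples):
        # Symmetric random-walk proposal
        proposal = x + rng.normal(0.0, proposal_sd)
        # Accept with probability min(1, target(proposal) / target(x))
        if rng.random() < target(proposal) / target(x):
            x = proposal
        samples.append(x)
    return np.array(samples)

samples = metropolis_hastings()
```

Note that only the *ratio* of target densities is needed, so the normalizing constant of the distribution may be unknown.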

Particle Filtering is also termed Sequential Monte Carlo. It refers to the process of repeatedly sampling particles, casting votes after each iteration based on the sampled particles, and modifying the next round of sampling based on those votes, in order to obtain the probability distribution of some unobservable states.

Formally, let **x** be the unobservable states and **y** be the observable states related to **x**. Suppose we receive observations of **y** at each time step *k*; we can write the probability based on a Markov chain:

Based on the Chapman-Kolmogorov equation and Bayes' theorem, the conditional probability distribution of the latent states **x** given the prior knowledge **y** is:

**Unknown:** Joint distribution

**Known:** Conditional Probability

**Goal:** Obtain an estimation of the joint distribution

Steps:

- Choose an initial value for each variable of interest.
- Compute the conditional distribution of one variable by fixing the values of all the other variables.
- Sample from that conditional distribution to get a new realization of the variable, then update the conditional probabilities of the others correspondingly.
- Sample the target variable.
- Repeat steps 2 to 3 for every variable, and run the whole sweep for *k* iterations.

An implementation is given below:

```python
import numpy as np
import matplotlib.pyplot as plt


def main():
    """
    This program demonstrates a two-variable Gibbs sampling iteration.

    X(size), Y(size)  Samplers which realize the corresponding variables.
    PX, PY            Predefined probability distributions of the two random
                      variables. PX and PY are what we wish to estimate and
                      are often unknown in practice.
    properties        Properties of the pdfs PX and PY, including the domain,
                      resolution and a norm constant used for plotting the p.m.f.
    :return None:
    """
    X, Y, PX, PY, properties = GenerateSamplers()
    w = np.linspace(properties['domain'][0],
                    properties['domain'][1],
                    properties['resolution'])
    Xcollection = []
    x_k = X(1)  # Initial sampling
    y_0 = Y(1)  # Initial sampling
    PYcX = PY / x_k  # P(Y|X=x_k); should be known from statistical data instead
    PXcY = PX / y_0  # P(X|Y=y_0); should be known from statistical data also
    PYcX /= PYcX.sum()  # Normalizing the conditional probabilities
    PXcY /= PXcY.sum()
    for k in range(50000):
        PYcX /= x_k          # Update conditional probability
        PYcX /= PYcX.sum()   # Normalize
        y_k = np.random.choice(w, p=PYcX, size=1)  # Sample from the new distribution
        PXcY /= y_k          # Update conditional probability
        PXcY /= PXcY.sum()   # Normalize
        x_k = np.random.choice(w, p=PXcY, size=1)
        Xcollection.append(x_k)  # Record the sample

    # Plotting
    plt.hist(np.array(Xcollection), bins=200, density=True, alpha=0.5)
    plt.plot(w, PX / properties['normConstant'])
    plt.show()


if __name__ == '__main__':
    main()
```

And the GenerateSamplers() function:

```python
def GenerateSamplers():
    """
    Creates a pair of random variables: one probability distribution is a
    Gaussian mixture, the other is a simple Gaussian with mean 0 and sd 5.
    Domain of the samples is set to -10 to 10.
    :return: X, Y, PX, PY, properties
    """
    # Properties settings
    resolution = 2000  # 2000 partitions over the whole domain
    domain = [-10, 10]
    gm = {'means': [-1, 2, -4], 'sds': [0.4, 8, 3], 'weight': [0.1, 0.6, 0.3]}
    gy = {'means': 0, 'sds': 5}

    # Define a normalized Gaussian pdf
    def Gaussian(mean, sd, x):
        return 1 / (sd * np.sqrt(2 * np.pi)) * np.exp(-0.5 * (x - mean) ** 2 / sd ** 2)

    w = np.linspace(domain[0], domain[1], resolution)

    # Generate pdfs
    PX = np.sum([gm['weight'][i] * Gaussian(gm['means'][i], gm['sds'][i], w)
                 for i in range(len(gm['means']))], axis=0)
    PY = Gaussian(gy['means'], gy['sds'], w)

    # Normalization
    PX /= PX.sum()
    PY /= PY.sum()

    # Create sampler functions
    X = lambda size: np.random.choice(w, p=PX, size=size)
    Y = lambda size: np.random.choice(w, p=PY, size=size)
    properties = {'resolution': resolution,
                  'domain': domain,
                  'normConstant': (domain[1] - domain[0]) / float(resolution - 1)}
    return X, Y, PX, PY, properties
```

The result is the following figure, where P(X) is a mixture of Gaussians (a linear combination of Gaussians):


Each random variable can have a different probability distribution. Such distributions have a very dominant effect on the behavior of a stochastic process, as described in the previous articles Stochastic – Poisson Process and Stochastic – Random Walk.

Some commonly used distributions are recorded here.

Properties: If two zero-mean Gaussian variables *X* and *Y* are independent and share the same variance, then √(X² + Y²) has a Rayleigh distribution with the same variance parameter as *X* and *Y*.
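As a quick numerical check of this property (a sketch; the value of σ and the sample size below are illustrative, not from the post):

```python
import numpy as np

# Draw two independent zero-mean Gaussians with the same sd, then
# check that sqrt(X^2 + Y^2) matches the mean of a Rayleigh(sigma).
rng = np.random.default_rng(0)
sigma = 2.0
n = 200000
X = rng.normal(0.0, sigma, n)
Y = rng.normal(0.0, sigma, n)
R = np.sqrt(X ** 2 + Y ** 2)

# A Rayleigh(sigma) variable has mean sigma * sqrt(pi / 2)
expected_mean = sigma * np.sqrt(np.pi / 2)
```

The empirical mean of `R` should agree with `expected_mean` to within sampling error.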

- *p(x, t | x₀, t₀)* = transition probability from state (*x₀, t₀*) to (*x, t*)

- *g(s, t)* = generating function

- *W(t)* = sample path of a Wiener process

The definition of the Wiener process is derived from the Fokker-Planck equation, where the jump term of the master equation (or the differential Chapman-Kolmogorov equation) vanishes, the coefficient of the **drift term A** is zero, and the **diffusion term B** is constant (taken as 1 for the standard Wiener process).
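With A = 0 and B = 1, the one-dimensional Fokker-Planck equation [Eq.1] then takes the standard form (following Gardiner):

```latex
\frac{\partial p(x, t \mid x_0, t_0)}{\partial t}
= \frac{1}{2} \frac{\partial^2 p(x, t \mid x_0, t_0)}{\partial x^2}
```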

A Wiener process is a Markov process whose transition probabilities fulfill the above equation.

Introduce the generating function [Eq.2]:

and also the Bra-ket notation, defined as follows:

**(Orthogonality)**

**(Completeness)**

A bra or a ket is often referred to as a "basis". Because of the properties stated above, it is clear that once you define one of the bases, you can readily construct the complementary basis.

Rewriting the generating function:

The complementary basis can be defined as follows, w.r.t. the definition of the Dirac delta function:

The generating function is selected so that, when combined with [Eq.1] (i.e. the Fokker-Planck equation), it satisfies the following:

(solved by separation of variables)

Considering the initial condition, we can solve for *g(s, t)*:

Then by the property of Bra-ket:

Using integration by parts, we finally obtain the transition probability [Eq.3]:

One can easily identify that the transition probability is Gaussian; a process following [Eq.3] will then have mean *x₀* and variance *t − t₀*:
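As a numerical sanity check of this result (a sketch not in the original post; the path count, step size and total time below are illustrative):

```python
import numpy as np

# Simulate many sample paths of a standard Wiener process and check that
# x(t) - x(0) has mean 0 and variance t.
rng = np.random.default_rng(0)
n_paths, n_steps, dt = 20000, 100, 0.01  # total time t = 1.0

# Each increment is N(0, dt); summing over steps gives x(t) - x(0)
increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
displacement = increments.sum(axis=1)
```

The empirical mean of `displacement` should be near 0 and its variance near the total time t = 1.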

Gardiner, C. *Stochastic Methods: A Handbook for the Natural and Social Sciences*, 4th ed. Springer, 2009.

Stochastic Process

Stochastic – Differential Chapman-Kolmogorov Equation

The configuration details can be found here. You would need to create a file "jupyter_qtconsole_config.py" in the directory ~/.jupyter.

```python
c.JupyterWidget.editor = u'Code'  # Set Visual Studio Code as the default editor
```


For this post visit: http://learningnotes.fromosia.com/index.php/2017/04/06/python-scrapping-javascript-driven-web/

## Required Packages

`dryscrape`

Note that this package has no official Windows release. This post will be based on Ubuntu.

## Installation

```shell
sudo apt-get install qt5-default qt5-qmake libqt5webkit5-dev xvfb
sudo -H pip install webkit-server
sudo -H pip install dryscrape
```

## Tutorial

## Using XPath to locate web content

Commonly used syntax:

| Syntax | Effect |
| --- | --- |
| `//` | Search all children recursively under the current node |
| `/` | Search all children under the current node |
| `tag[@att='val']` | Search all `tag` elements whose `att` attribute equals `val` |

## Examples

XML Content

```html
<div>
  <span id="DecentTag">First content to scrape </span>
  <span class="Distraction"><span class="Distraction">
    <span class="DecentClass"> Second content to scrape</span></span></span>
  <div><span class="InnerSelf">Nope, nope, nope</span></div>
</div>
```

Then to get the three contents, you can use the following syntax:

```
id('DecentTag')
/body/div/span[@class='DecentClass']
/body//span[@class='InnerSelf']
```

## Using python to scrape web contents

If your target data doesn't require JavaScript running on the client, you can simply use the well-known Python package

`requests`

to obtain a string of the web content, following the example below:

```python
import lxml.html
import requests

url = "http://stackoverflow.com/help"
xpath = "id('help-index')/div[2]"
headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) '
                         'AppleWebKit/537.36 (KHTML, like Gecko) '
                         'Chrome/39.0.2171.95 Safari/537.36'}
r = requests.get(url, headers=headers)
tree = lxml.html.fromstring(r.content)
element = tree.xpath(xpath)[0]  # xpath() returns a list of matches
content = element.text_content()
```

## Using python to scrape javascript driven web

If your target is updated by JavaScript from time to time, simple Python `requests` calls will not obtain what you want. Here we introduce a Linux Python package, `dryscrape`. A simple example is given below:

```python
import dryscrape

dryscrape.start_xvfb()
sess = dryscrape.Session()
sess.visit("http://stackoverflow.com/help")
q = sess.at_xpath("some path")
content = q.text()
```

As simple as that.

Here we do not show the derivation of the differential Chapman-Kolmogorov equation; instead, we only show how to interpret the result. In the following sections, it is assumed that the stochastic process has Markov properties and that the sample paths are always continuous and satisfy [Eq.1]:
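In standard notation (following Gardiner), this continuity condition reads:

```latex
\lim_{\Delta t \to 0} \frac{1}{\Delta t}
\int_{|\mathbf{x}-\mathbf{z}| > \delta}
p(\mathbf{x}, t + \Delta t \mid \mathbf{z}, t)\, d\mathbf{x} = 0
\quad \text{for all } \delta > 0
```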

which practically says that, integrating over all possible changes of state from **z** to **x** whose "distance" is greater than *δ* (with *δ* > 0), the resulting probability is zero; thus there is no possibility of a finite change of state within an infinitesimal period of time Δ*t*. This is also called the Lindeberg condition.

The Chapman-Kolmogorov equation defines, for a Markov process, the conditional probabilities [Eq.2]:
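In standard notation, the Chapman-Kolmogorov equation for the transition probabilities is:

```latex
p(\mathbf{x}, t \mid \mathbf{x}_0, t_0)
= \int p(\mathbf{x}, t \mid \mathbf{z}, t')\,
       p(\mathbf{z}, t' \mid \mathbf{x}_0, t_0)\, d\mathbf{z},
\qquad t_0 \le t' \le t
```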

With respect to that, we require the following properties in order to reduce [Eq.2] into a **differential equation**, for all *δ* > 0, denoting the condition |**x** − **z**| < *δ* as *ε*:

In the first requirement, *W*(**x**|**z**, *t*) is the jump term, which describes discontinuous evolution for any positive distance between **x** and **z**. Note that if *W* = 0, we restore [Eq.1] and the sample path is continuous.

In the second requirement, as *δ* tends to zero the O(*δ*) term vanishes (i.e. a continuous path), and the term A_i is often called the **drift** term because it implies E[d**x**] = **A**(**x**, t)dt, which can be interpreted as the instantaneous rate of change of **z**.

In the third requirement, B(**z**, t) is often referred to as the **diffusion** term, which can be seen as the instantaneous rate of change of the covariance of processes close to **z**.
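Following Gardiner's notation, the three requirements above can be written as:

```latex
\begin{aligned}
W(\mathbf{x} \mid \mathbf{z}, t)
  &= \lim_{\Delta t \to 0}
     \frac{p(\mathbf{x}, t + \Delta t \mid \mathbf{z}, t)}{\Delta t},
  \qquad |\mathbf{x}-\mathbf{z}| \ge \delta \\
A_i(\mathbf{z}, t)
  &= \lim_{\Delta t \to 0} \frac{1}{\Delta t}
     \int_{|\mathbf{x}-\mathbf{z}| < \delta}
     (x_i - z_i)\, p(\mathbf{x}, t + \Delta t \mid \mathbf{z}, t)\, d\mathbf{x}
     + O(\delta) \\
B_{ij}(\mathbf{z}, t)
  &= \lim_{\Delta t \to 0} \frac{1}{\Delta t}
     \int_{|\mathbf{x}-\mathbf{z}| < \delta}
     (x_i - z_i)(x_j - z_j)\, p(\mathbf{x}, t + \Delta t \mid \mathbf{z}, t)\, d\mathbf{x}
     + O(\delta)
\end{aligned}
```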

The full derivation will not be stated here; in fact, it can easily be found on the internet by searching for the keywords **Differential Chapman-Kolmogorov**. However, it is crucial to understand its origin, so a brief introduction to the origin of the differential Chapman-Kolmogorov equation is written below, referencing Crispin Gardiner's book *Stochastic Methods: A Handbook for the Natural and Social Sciences*.

First, we consider the time evolution of a certain function *f*(**z**), which we require to be twice differentiable. Then the instantaneous rate of change of the expected value of this function can be written as:

[by first principle]

[by Chapman-Kolmogorov Equation]

Dividing the integrals into two regions, |**x** − **z**| > *δ* and |**x** − **z**| < *δ*, and after some manipulation, the result is [Eq.3]:

This is the **differential Chapman-Kolmogorov equation**, sometimes called the **master equation**.

If we deliberately force the master equation to disobey [Eq.1], we obtain a discontinuous process. For example, forcing both **A**(**z**, t) and **B**(**z**, t) to be zero, the differential equation is left as:

Using the initial condition p(**z**, t | **y**, t) = δ(**z** − **y**), with δ being the Dirac delta function, the particular solution to the master equation is:
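To first order in Δt, this particular solution has the standard form:

```latex
p(\mathbf{z}, t + \Delta t \mid \mathbf{y}, t)
= \delta(\mathbf{z} - \mathbf{y})
  \left[ 1 - \Delta t \int W(\mathbf{x} \mid \mathbf{y}, t)\, d\mathbf{x} \right]
+ \Delta t\, W(\mathbf{z} \mid \mathbf{y}, t)
```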

Observe that if we want to know the probability of staying in **y** after Δt, we substitute **z** = **y** and get the following equation [Eq.4]:

Let's break this down term by term so that it is easier to understand. Firstly, W(**z**|**y**, t) is the instantaneous probability (probability per unit time) of going to state **z** from **y**; therefore, W(**z**|**y**, t)Δt is the accumulated probability of going to **z** from **y** within the time period Δt. Secondly, the integral inside the square bracket denotes the total probability of leaving state **y** for any other state **x**; thus, one minus the integral is the total probability of *staying in state* **y**. In the case of [Eq.4], W(**y**|**y**, t)Δt is the probability of going from state **y** to state **y**, meaning it includes the probability of first leaving state **y** and then returning to **y** within Δt, while the term with the integral represents **ONLY** staying in state **y**. Finally, the sum of both terms gives *the probability of staying in state **y** within Δt* in [Eq.4].

The Fokker-Planck equation is obtained when the jump term *W* in the master equation is forced to zero [Eq.5]:
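In its standard form (cf. Gardiner), the Fokker-Planck equation reads:

```latex
\frac{\partial p(\mathbf{z}, t \mid \mathbf{y}, t')}{\partial t}
= -\sum_i \frac{\partial}{\partial z_i}
   \left[ A_i(\mathbf{z}, t)\, p(\mathbf{z}, t \mid \mathbf{y}, t') \right]
+ \frac{1}{2} \sum_{i,j} \frac{\partial^2}{\partial z_i \partial z_j}
   \left[ B_{ij}(\mathbf{z}, t)\, p(\mathbf{z}, t \mid \mathbf{y}, t') \right]
```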

This equation is sometimes known as the diffusion process with **drift term A** and **diffusion matrix B**. Based on this equation, if we wish to compute , we can neglect the partial derivatives of

and solve this equation with the constraint (i.e. only one state exists at a time) to obtain:

which is, in fact, a Gaussian distribution, with its variance matrix and mean determined by **B** and **A** respectively.

Gardiner, Crispin W. *Stochastic Methods: A Handbook for the Natural and Social Sciences*. Springer, 2009.

Hierro, Juan, and César Dopazo. "Singular boundaries in the forward Chapman-Kolmogorov differential equation." *Journal of Statistical Physics* 137.2 (2009): 305-329.