Hamiltonian Monte-Carlo
Hamiltonian Monte Carlo is a Metropolis method, applicable to continuous state spaces, that makes use of gradient information to reduce random-walk behaviour.
Notes based on http://python4mpia.github.io/index.html and David MacKay's book *Information Theory, Inference, and Learning Algorithms*.
For many systems whose probability $P(\mathbf{x})$ can be written in the form

$$P(\mathbf{x}) = \frac{e^{-E(\mathbf{x})}}{Z},$$

not only $E(\mathbf{x})$ but also its gradient with respect to $\mathbf{x}$ can be readily evaluated. It seems wasteful to use a simple random-walk Metropolis method when this gradient is available: the gradient indicates which direction one should go in to find states that have higher probability!
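As a toy illustration (our own, not part of the original notes): for a one-dimensional unit Gaussian the energy and its gradient are trivial to evaluate, and the negative gradient always points back towards the high-probability region around the mean.

```python
# Toy example (assumed for illustration): unit Gaussian,
# P(x) proportional to exp(-E(x)) with energy E(x) = x^2 / 2.
def E(x):
    return 0.5 * x**2

def gradE(x):
    # dE/dx = x, so -gradE(x) = -x points towards the mode at x = 0,
    # i.e. towards states of higher probability.
    return x

print(gradE(2.0))  # 2.0: move in the -x direction to increase P(x)
```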
## Overview
In the Hamiltonian Monte Carlo method, the state space $\mathbf{x}$ is augmented by momentum variables $\mathbf{p}$, and there is an alternation of two types of proposal. The first proposal randomizes the momentum variable, leaving the state $\mathbf{x}$ unchanged. The second proposal changes both $\mathbf{x}$ and $\mathbf{p}$ using simulated Hamiltonian dynamics as defined by the Hamiltonian

$$H(\mathbf{x}, \mathbf{p}) = E(\mathbf{x}) + K(\mathbf{p}),$$

where $K(\mathbf{p})$ is a 'kinetic energy' such as $K(\mathbf{p}) = \mathbf{p}^{\dagger}\mathbf{p}/2$. These two proposals are used to create (asymptotically) samples from the joint density

$$P_{H}(\mathbf{x}, \mathbf{p}) = \frac{1}{Z_{H}} \exp[-H(\mathbf{x}, \mathbf{p})] = \frac{1}{Z_{H}} \exp[-E(\mathbf{x})] \, \exp[-K(\mathbf{p})].$$

This density is separable, so the marginal distribution of $\mathbf{x}$ is the desired distribution $\exp[-E(\mathbf{x})]/Z$. So, simply discarding the momentum variables, we obtain a sequence of samples $\{\mathbf{x}^{(t)}\}$ that asymptotically come from $P(\mathbf{x})$.
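Written out (standard HMC bookkeeping, matching the code further below): the momentum refresh draws $\mathbf{p} \sim \mathcal{N}(0, 1)$, which is exactly a draw from $\exp[-K(\mathbf{p})]/Z_K$ for the Gaussian kinetic energy, and the dynamical proposal $(\mathbf{x}^{*}, \mathbf{p}^{*})$ is accepted with probability

$$P_{\mathrm{accept}} = \min\left(1,\, e^{-\Delta H}\right), \qquad \Delta H = H(\mathbf{x}^{*}, \mathbf{p}^{*}) - H(\mathbf{x}, \mathbf{p}).$$

If the dynamics could be simulated exactly, $H$ would be conserved ($\Delta H = 0$) and every proposal would be accepted; rejections only correct for the discretization error of the integrator.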
### An algorithm for Hamiltonian Monte-Carlo
First, we need to compute the gradient of our objective function.
```python
import math
import numpy

# Define gradient of log-likelihood.
def evaluateGradient(params, D, N, M_min, M_max, log_M_min, log_M_max):
    alpha = params[0]  # extract alpha
    grad = log_M_min*math.pow(M_min, 1.0-alpha) \
           - log_M_max*math.pow(M_max, 1.0-alpha)
    grad = 1.0 + grad*(1.0 - alpha)/(math.pow(M_max, 1.0-alpha)
                                     - math.pow(M_min, 1.0-alpha))
    grad = -D - N*grad/(1.0 - alpha)
    return numpy.array(grad)
```
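The sampler below also calls `evaluateLogLikelihood` and uses toy data (`D`, `N`, `M_min`, `M_max`) that are not shown in this excerpt. Here is a minimal sketch consistent with the gradient above, assuming a truncated power-law (Salpeter-type) model with slope $\alpha$ and $D = \sum_i \log M_i$; the inverse-transform sampler for the toy masses is our own illustrative stand-in:

```python
# Log-likelihood of a truncated power law M^(-alpha) on [M_min, M_max],
# with D = sum of log-masses over all N data points.
def evaluateLogLikelihood(params, D, N, M_min, M_max):
    alpha = params[0]  # extract alpha
    # Normalisation constant of the truncated power law.
    c = (1.0 - alpha)/(math.pow(M_max, 1.0-alpha)
                       - math.pow(M_min, 1.0-alpha))
    return N*math.log(c) - alpha*D

# Illustrative toy data (assumed, not from the original post):
# draw N masses by inverse-transform sampling of the power law.
N, M_min, M_max, true_alpha = 1000000, 1.0, 100.0, 2.35
a = 1.0 - true_alpha
u = numpy.random.uniform(size=N)
Masses = (u*(M_max**a - M_min**a) + M_min**a)**(1.0/a)
D = numpy.sum(numpy.log(Masses))
```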
Hamiltonian Monte-Carlo makes use of the fact that we can write our likelihood as

$$\mathcal{L} = \exp(\log \mathcal{L}) = \exp(-E),$$

where $E = -\log \mathcal{L}$ is the "energy". The algorithm then uses Hamiltonian dynamics to modify the way candidates are proposed.
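The dynamics are integrated numerically with the leapfrog scheme; this is the standard discretization (with stepsize $\epsilon$), written here for our single parameter $\alpha$, and it is exactly what the inner `tau` loop in the code below implements:

$$p\left(t + \tfrac{\epsilon}{2}\right) = p(t) - \tfrac{\epsilon}{2}\,\frac{\partial E}{\partial \alpha}\big(\alpha(t)\big), \quad \alpha(t + \epsilon) = \alpha(t) + \epsilon\, p\left(t + \tfrac{\epsilon}{2}\right), \quad p(t + \epsilon) = p\left(t + \tfrac{\epsilon}{2}\right) - \tfrac{\epsilon}{2}\,\frac{\partial E}{\partial \alpha}\big(\alpha(t + \epsilon)\big).$$

Leapfrog is time-reversible and volume-preserving, which is what makes the simple Metropolis acceptance rule above valid.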
```python
import copy
import random

log_M_min = math.log(1.0)
log_M_max = math.log(100.0)
# Initial guess for alpha as array.
guess = [3.0]
# Prepare storing MCMC chain.
A = [guess]
# Define stepsize of MCMC.
stepsize = 0.000047
accepted = 0.0

# Hamiltonian Monte-Carlo.
for n in range(10000):
    old_alpha = A[len(A)-1]
    # Remember, energy = -loglik.
    old_energy = -evaluateLogLikelihood(old_alpha, D, N, M_min, M_max)
    old_grad = -evaluateGradient(old_alpha, D, N, M_min, M_max,
                                 log_M_min, log_M_max)

    new_alpha = copy.copy(old_alpha)  # copy of current position
    new_grad = copy.copy(old_grad)    # copy of current gradient
    # Suggest new candidate using gradient + Hamiltonian dynamics.
    # Draw random momentum vector from unit Gaussian.
    p = random.gauss(0.0, 1.0)
    H = numpy.dot(p, p)/2.0 + old_energy  # compute Hamiltonian

    # Do 5 leapfrog steps.
    for tau in range(5):
        # Make half step in p.
        p = p - stepsize*new_grad/2.0
        # Make full step in alpha.
        new_alpha = new_alpha + stepsize*p
        # Compute new gradient (at the *new* position).
        new_grad = -evaluateGradient(new_alpha, D, N, M_min, M_max,
                                     log_M_min, log_M_max)
        # Make half step in p.
        p = p - stepsize*new_grad/2.0

    # Compute new Hamiltonian. Remember, energy = -loglik.
    new_energy = -evaluateLogLikelihood(new_alpha, D, N, M_min, M_max)
    newH = numpy.dot(p, p)/2.0 + new_energy
    dH = newH - H

    # Accept new candidate in Monte-Carlo fashion.
    if dH < 0.0:
        A.append(new_alpha)
        accepted = accepted + 1.0
    else:
        u = random.uniform(0.0, 1.0)
        if u < math.exp(-dH):
            A.append(new_alpha)
            accepted = accepted + 1.0
        else:
            A.append(old_alpha)
```
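Because the loop keeps the `accepted` counter, it is worth printing the acceptance rate as a quick diagnostic (a small addition of ours; well-tuned HMC typically accepts far more proposals than random-walk Metropolis):

```python
# Fraction of accepted proposals over the whole chain.
print("Acceptance rate = " + str(accepted/float(len(A))))
```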
Time to plot our results.

```python
import matplotlib.pyplot as plt

# Discard first half of MCMC chain (burn-in) and thin out the rest.
Clean = []
for n in range(len(A)//2, len(A)):
    if n % 10 == 0:
        Clean.append(A[n][0])

# Print Monte-Carlo estimate of alpha.
print("Mean:  " + str(numpy.mean(Clean)))
print("Sigma: " + str(numpy.std(Clean)))

plt.figure(1)
plt.hist(Clean, 20, histtype='step', lw=3)
plt.xticks([2.346, 2.348, 2.350, 2.352, 2.354],
           [2.346, 2.348, 2.350, 2.352, 2.354])
plt.xlim(2.345, 2.355)
plt.xlabel(r'$\alpha$', fontsize=24)
plt.ylabel(r'$\mathcal{L}($Data$;\alpha)$', fontsize=24)
plt.savefig('example-HamiltonianMC-results.png')
```
The true value we used to generate the data was $\alpha = 2.35$.
The Monte-Carlo estimate is $\hat{\alpha} = 2.3507$.
Source: https://peadarcoyle.wordpress.com/2016/01/02/hamiltonian-monte-carlo/