# Hamiltonian Monte-Carlo

By Peadar Coyle


Hamiltonian Monte Carlo is a Metropolis method, applicable to continuous state spaces, that makes use of gradient information to reduce random-walk behaviour.
Notes based on the python4mpia tutorial (http://python4mpia.github.io/index.html) and David MacKay's book *Information Theory, Inference, and Learning Algorithms*.

For many systems whose probability $P(\mathbf{x})$ can be written in the form

$P(\mathbf{x}) = \frac{1}{Z} \exp[-E(\mathbf{x})],$

not only $-E(\mathbf{x})$ but also its gradient with respect to $\mathbf{x}$
can be readily evaluated. It seems wasteful to use a simple random-walk
Metropolis method when this gradient is available: the gradient indicates
which direction one should go in to find states that have higher probability!
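As a toy illustration of this point (a standard Gaussian with $E(x) = x^2/2$, not the power-law model used later in this post), a small step along the gradient of $-E$ moves the state toward higher probability:

```python
import math

def prob(x):
    """Unnormalised P(x) = exp(-E(x)) with E(x) = x**2 / 2."""
    return math.exp(-0.5 * x * x)

def grad_neg_E(x):
    """Gradient of -E(x); here d(-E)/dx = -x."""
    return -x

x = 2.0
x_step = x + 0.1 * grad_neg_E(x)  # small step along the gradient
# The step moves x toward the mode at 0, so the probability increases.
```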

## Overview
In the Hamiltonian Monte Carlo method, the state space $\mathbf{x}$ is augmented by
momentum variables $\mathbf{p}$, and there is an alternation of two types of proposal.
The first proposal randomizes the momentum variable, leaving the state $\mathbf{x}$ unchanged. The second proposal changes both $\mathbf{x}$ and $\mathbf{p}$ using simulated Hamiltonian dynamics as defined by the Hamiltonian

$H(\mathbf{x}, \mathbf{p}) = E(\mathbf{x}) + K(\mathbf{p}),$

where $K(\mathbf{p})$ is a 'kinetic energy' such as $K(\mathbf{p}) = \mathbf{p}^{\dagger}\mathbf{p} / 2$. These two proposals are used to create
(asymptotically) samples from the joint density

$P_{H}(\mathbf{x}, \mathbf{p}) = \frac{1}{Z_{H}} \exp[-H(\mathbf{x}, \mathbf{p})] = \frac{1}{Z_{H}} \exp[-E(\mathbf{x})] \exp[-K(\mathbf{p})].$

This density is separable, so the marginal distribution of $\mathbf{x}$ is the
desired distribution $\exp[-E(\mathbf{x})]/Z$. So, simply discarding the momentum variables, we obtain a sequence of samples $\lbrace \mathbf{x}^{(t)} \rbrace$ that asymptotically come from $P(\mathbf{x})$.
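To make the alternation of the two proposals concrete, here is a minimal, self-contained HMC sketch for a one-dimensional standard Gaussian target, with $E(x) = x^2/2$ and $K(p) = p^2/2$. This toy target is an assumption for illustration only; the power-law example worked through below is the one from the post.

```python
import math
import random

def hmc_gaussian(n_samples=5000, step=0.3, n_leapfrog=10, seed=42):
    """Sample a 1-D standard Gaussian, E(x) = x**2/2, via HMC."""
    rng = random.Random(seed)
    x = 0.0
    samples = []
    for _ in range(n_samples):
        # Proposal 1: randomize the momentum, leaving x unchanged.
        p = rng.gauss(0.0, 1.0)
        H_old = 0.5 * p * p + 0.5 * x * x   # H = K(p) + E(x)
        # Proposal 2: simulate Hamiltonian dynamics with leapfrog steps
        # (grad E(x) = x for this target).
        x_new, p_new = x, p
        p_new -= 0.5 * step * x_new          # half step in p
        for i in range(n_leapfrog):
            x_new += step * p_new            # full step in x
            if i != n_leapfrog - 1:
                p_new -= step * x_new        # full step in p
        p_new -= 0.5 * step * x_new          # final half step in p
        H_new = 0.5 * p_new * p_new + 0.5 * x_new * x_new
        # Metropolis accept/reject on the change in H.
        if rng.random() < math.exp(min(0.0, H_old - H_new)):
            x = x_new
        samples.append(x)                    # discard p: keep only x
    return samples

samples = hmc_gaussian()
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

Discarding the momenta, the retained $x$ samples should have mean near 0 and variance near 1, the marginal of the separable joint density.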

### An algorithm for Hamiltonian Monte-Carlo
First, we need to compute the gradient of our objective function.

```python
import math
import numpy

# Define gradient of log-likelihood.
def evaluateGradient(params, D, N, M_min, M_max, log_M_min, log_M_max):
    alpha = params[0]  # extract alpha
    grad = log_M_min*math.pow(M_min, 1.0-alpha) \
         - log_M_max*math.pow(M_max, 1.0-alpha)
    grad = 1.0 + grad*(1.0 - alpha)/(math.pow(M_max, 1.0-alpha)
                                     - math.pow(M_min, 1.0-alpha))
    grad = -D - N*grad/(1.0 - alpha)
    return numpy.array(grad)
```
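This gradient pairs with the log-likelihood of the power-law (Salpeter-like) model from the linked tutorial. That function is not repeated in this post; a sketch consistent with the call signature used below, assuming `D` is the sum of log-masses and `N` the number of data points, might look like:

```python
import math

def evaluateLogLikelihood(params, D, N, M_min, M_max):
    """Log-likelihood of a power-law p(M) ~ M**(-alpha) on [M_min, M_max].

    Sketch only: assumes D = sum(log M_i) and N = number of data points,
    matching the linked python4mpia tutorial.
    """
    alpha = params[0]  # extract alpha
    # Normalisation constant of the truncated power-law.
    c = (1.0 - alpha) / (math.pow(M_max, 1.0 - alpha)
                         - math.pow(M_min, 1.0 - alpha))
    return N * math.log(c) - alpha * D
```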

Hamiltonian Monte-Carlo makes use of the fact that we can write our likelihood as

$\mathcal{L} = \exp(\log \mathcal{L}) = \exp(-E),$

where $E = -\log \mathcal{L}$ is the "energy". The algorithm then uses
Hamiltonian dynamics to modify the way candidates are proposed:

```python
import copy
import math
import random
import numpy

log_M_min = math.log(1.0)
log_M_max = math.log(100.0)
# Initial guess for alpha as array.
guess = [3.0]
# Prepare storing MCMC chain.
A = [guess]
# Define stepsize of MCMC.
stepsize = 0.000047
accepted = 0.0

# Hamiltonian Monte-Carlo.
for n in range(10000):
    old_alpha = A[len(A)-1]
    # Remember, energy = -loglik.
    old_energy = -evaluateLogLikelihood(old_alpha, D, N, M_min, M_max)
    old_grad = -evaluateGradient(old_alpha, D, N, M_min, M_max,
                                 log_M_min, log_M_max)
    new_alpha = copy.copy(old_alpha)  # copy of array
    new_grad = copy.copy(old_grad)    # copy of array
    # Suggest new candidate using gradient + Hamiltonian dynamics:
    # draw random momentum vector from unit Gaussian.
    p = random.gauss(0.0, 1.0)
    H = numpy.dot(p, p)/2.0 + old_energy  # compute Hamiltonian
    # Do 5 leapfrog steps.
    for tau in range(5):
        # Make half step in p.
        p = p - stepsize*new_grad/2.0
        # Make full step in alpha.
        new_alpha = new_alpha + stepsize*p
        # Compute new gradient at the updated position.
        new_grad = -evaluateGradient(new_alpha, D, N, M_min, M_max,
                                     log_M_min, log_M_max)
        # Make half step in p.
        p = p - stepsize*new_grad/2.0
    # Compute new Hamiltonian. Remember, energy = -loglik.
    new_energy = -evaluateLogLikelihood(new_alpha, D, N, M_min, M_max)
    newH = numpy.dot(p, p)/2.0 + new_energy
    dH = newH - H
    # Accept new candidate in Monte-Carlo fashion.
    if dH < 0.0:
        A.append(new_alpha)
        accepted = accepted + 1.0
    else:
        u = random.uniform(0.0, 1.0)
        if u < math.exp(-dH):
            A.append(new_alpha)
            accepted = accepted + 1.0
        else:
            A.append(old_alpha)
```

Time to plot our results.

```python
import numpy
import matplotlib.pyplot as plt

# Discard first half of MCMC chain and thin out the rest.
Clean = []
for n in range(len(A)//2, len(A)):
    if n % 10 == 0:
        Clean.append(A[n][0])

# Print Monte-Carlo estimate of alpha.
print("Mean:  " + str(numpy.mean(Clean)))
print("Sigma: " + str(numpy.std(Clean)))

plt.figure(1)
plt.hist(Clean, 20, histtype='step', lw=3)
plt.xticks([2.346, 2.348, 2.35, 2.352, 2.354],
           [2.346, 2.348, 2.35, 2.352, 2.354])
plt.xlim(2.345, 2.355)
plt.xlabel(r'$\alpha$', fontsize=24)
plt.ylabel(r'$\cal L($Data$;\alpha)$', fontsize=24)
plt.savefig('example-HamiltonianMC-results.png')
```

The true value we used to generate the data was $\alpha = 2.35$.
The Monte-Carlo estimate is $\hat{\alpha} = 2.3507$.


Source: https://peadarcoyle.wordpress.com/2016/01/02/hamiltonian-monte-carlo/

