# Category Archives: Gibbs Sampling

## Reanalysis of business visits from deployments of a mobile phone app

Updated 20th October 2020. This reports a reanalysis of data from the deployment of a mobile phone app, as reported in: M. Yauck, L.-P. Rivest, G. Rothman, “Capture-recapture methods for data on the activation of applications on mobile phones”, Journal … Continue reading

## Sampling: Rejection, Reservoir, and Slice

An article by Suilou Huang for catastrophe modeler AIR Worldwide of Boston about rejection sampling in CAT modeling got me thinking about pulling together some notes about sampling algorithms of various kinds. There are, of course, books written about this subject, … Continue reading
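
Of the algorithms named in the title, rejection sampling is the simplest to sketch. Here is a minimal, self-contained illustration (my own toy target, not taken from the article): draw from the density f(x) = 6x(1−x) on [0, 1] using a uniform proposal and an envelope constant M ≥ max f = 1.5.

```python
import random

# Rejection sampling sketch: draw from f(x) = 6x(1-x) on [0, 1]
# using a Uniform(0, 1) proposal g. The envelope constant M must
# satisfy f(x) <= M * g(x) everywhere; here max f = 1.5 suffices.

def target(x):
    return 6.0 * x * (1.0 - x)

def rejection_sample(n, m=1.5, seed=42):
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        x = rng.random()            # proposal draw from Uniform(0, 1)
        u = rng.random()            # acceptance test variable
        if u * m <= target(x):      # accept with probability f(x) / (M g(x))
            out.append(x)
    return out

draws = rejection_sample(10_000)
mean = sum(draws) / len(draws)      # target is Beta(2, 2), so mean near 0.5
```

The acceptance rate is 1/M, so the tighter the envelope, the fewer proposals are wasted.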

## “Grid shading by simulated annealing” [Martyn Plummer]

Source: Grid shading by simulated annealing (or what I did on my holidays), aka “fun with GCHQ job adverts”, by Martyn Plummer, developer of JAGS. Excerpt: I wanted to solve the puzzle but did not want to sit down with … Continue reading
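
For readers who have not met simulated annealing, a generic sketch of the scheme (my own toy one-dimensional cost, not Plummer's grid-shading code): propose a local move, always accept downhill, accept uphill with probability exp(−Δ/T), and cool T slowly.

```python
import math
import random

# Generic simulated-annealing sketch: minimise a bumpy 1-D cost by
# accepting uphill moves with probability exp(-delta / T) under a
# geometric cooling schedule, tracking the best state ever visited.

def cost(x):
    return (x - 2.0) ** 2 + 2.0 * math.sin(5.0 * x)

def anneal(t0=5.0, cooling=0.999, steps=20_000, seed=1):
    rng = random.Random(seed)
    x = rng.uniform(-5.0, 5.0)
    best, best_cost = x, cost(x)
    t = t0
    for _ in range(steps):
        y = x + rng.gauss(0.0, 0.5)          # local Gaussian proposal
        delta = cost(y) - cost(x)
        if delta < 0 or rng.random() < math.exp(-delta / t):
            x = y
            if cost(x) < best_cost:
                best, best_cost = x, cost(x)
        t *= cooling                         # cool slowly toward greedy descent
    return best, best_cost

best_x, best_c = anneal()
```

At high temperature the walk explores freely; as T falls it behaves like greedy hill-descent, which is why the schedule matters more than the proposal.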

## high dimension Metropolis-Hastings algorithms

If attempting to simulate from a multivariate standard normal distribution in a large dimension, starting from the mode of the target, i.e., its mean γ, leaving the mode γ is extremely unlikely, given the huge drop between the value of the density at the mode γ and at likely realisations … Continue reading
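
The size of that drop is easy to check numerically (my illustration, not the post's code): for a standard normal in dimension d, log f(mode) − log f(x) = ‖x‖²/2, and ‖x‖² ≈ d for a typical draw, so the density at the mode exceeds the density at a typical realisation by a factor of about e^{d/2}.

```python
import math
import random

# For a d-dimensional standard normal, the log-density gap between the
# mode and a draw x is exactly ||x||^2 / 2, which concentrates near d/2.

def log_density_drop(d, seed=0):
    rng = random.Random(seed)
    x = [rng.gauss(0.0, 1.0) for _ in range(d)]
    sq_norm = sum(xi * xi for xi in x)
    return sq_norm / 2.0                 # log f(0) - log f(x)

drop = log_density_drop(1000)            # concentrates near 500 nats
```

In dimension 1000 the mode is therefore roughly e^500 times denser than a typical point, which is exactly why a chain started at the mode struggles to leave it.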

## “A vignette on Metropolis” (Christian Robert)

Originally posted on Xi'an's Og:
Over the past week, I wrote a short introduction to the Metropolis-Hastings algorithm, mostly in the style of our Introduction to Monte Carlo with R book, that is, with very little theory and…
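
In the same minimal-theory spirit, a bare-bones random-walk Metropolis sketch (my own toy, not the vignette's or the book's code) targeting a standard normal:

```python
import math
import random

# Random-walk Metropolis: propose y = x + noise from a symmetric kernel,
# accept with probability min(1, f(y)/f(x)); for a standard normal target
# the log acceptance ratio is (x^2 - y^2) / 2.

def metropolis(n, step=1.0, seed=0):
    rng = random.Random(seed)
    x = 0.0
    chain = []
    for _ in range(n):
        y = x + rng.gauss(0.0, step)           # symmetric proposal
        log_ratio = 0.5 * (x * x - y * y)      # log target ratio
        if log_ratio >= 0 or rng.random() < math.exp(log_ratio):
            x = y                              # accept; otherwise keep x
        chain.append(x)
    return chain

chain = metropolis(50_000)
mean = sum(chain) / len(chain)
var = sum((c - mean) ** 2 for c in chain) / len(chain)
```

The chain's empirical mean and variance should settle near 0 and 1, the moments of the target.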

## Christian Robert on the amazing Gibbs sampler

Professor Christian Robert remarks on the amazing Gibbs sampler. Implicitly he’s also underscoring the power of properly done Bayesian computational analysis. For here we have a problem with a posterior distribution having two strong modes, so a point estimate, like … Continue reading
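
For concreteness, the mechanics of a Gibbs sampler in its simplest setting (my toy example, not Robert's): alternate draws from the full conditionals of a bivariate normal with correlation ρ, where X | Y = y ~ N(ρy, 1 − ρ²) and symmetrically for Y | X.

```python
import math
import random

# Two-block Gibbs sampler for a bivariate normal with correlation rho:
# each update draws one coordinate from its exact full conditional.

def gibbs(n, rho=0.8, seed=0):
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    sd = math.sqrt(1.0 - rho * rho)
    xs, ys = [], []
    for _ in range(n):
        x = rng.gauss(rho * y, sd)   # X | Y = y
        y = rng.gauss(rho * x, sd)   # Y | X = x
        xs.append(x)
        ys.append(y)
    return xs, ys

xs, ys = gibbs(50_000)
est_corr = sum(a * b for a, b in zip(xs, ys)) / len(xs)   # E[XY] = rho
```

The chain mixes more slowly as ρ approaches 1, a standard caution about Gibbs on strongly dependent components; the bimodal posterior of the post is a harder test still.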

## example of Bayesian inversion

This is based upon my solution of Exercise 2.3, page 18, R. Christensen, W. Johnson, A. Branscum, T. E. Hanson, Bayesian Ideas and Data Analysis, Chapman & Hall, 2011. The purpose is to show how information latent in a set … Continue reading

## Bayesian deconvolution of stick lengths

Consider trying to determine the length of a straight stick. Instead of the measurement errors being clustered about zero, suppose the errors are known to be always positive, that is, no measurement ever underestimates the length of the stick. Such … Continue reading
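
A minimal numerical version of this setup, under assumptions of my own choosing (exponential errors with known rate, a flat prior evaluated on a grid): since no measurement ever underestimates the length, every observation upper-bounds the true length, and the posterior lives entirely below the smallest measurement.

```python
import math
import random

# One-sided measurement error: m = length + e, e ~ Exponential(lam), e > 0.
# The likelihood of length l is zero whenever l exceeds any measurement,
# so the posterior is supported on l <= min(measurements).

def grid_posterior(measurements, lam=2.0, lo=0.0, hi=2.0, n_grid=2001):
    grid = [lo + (hi - lo) * i / (n_grid - 1) for i in range(n_grid)]
    logs = []
    for l in grid:
        if l > min(measurements):
            logs.append(float("-inf"))   # impossible: an error would be negative
        else:
            logs.append(sum(math.log(lam) - lam * (m - l) for m in measurements))
    top = max(logs)
    w = [math.exp(s - top) for s in logs]
    z = sum(w)
    return grid, [wi / z for wi in w]

rng = random.Random(3)
true_len = 1.0
data = [true_len + rng.expovariate(2.0) for _ in range(50)]
grid, post = grid_posterior(data)
post_mean = sum(g * p for g, p in zip(grid, post))
```

With n observations the posterior mass piles up within roughly 1/(nλ) of the smallest measurement, so the estimate approaches the true length from above as data accumulate.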

## The dp-means algorithm of Kulis and Jordan in R and Python

The dp-means algorithm: think k-means, but with the number of clusters calculated rather than fixed in advance. By John Myles White, in R. (GitHub link off that page.) By Scott Hendrickson, in Python. (GitHub link off that page.)
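
A compact sketch of the idea after Kulis and Jordan (my own paraphrase, not the linked R or Python implementations): run Lloyd-style iterations, but any point whose squared distance to every centroid exceeds a penalty λ spawns a new cluster at itself.

```python
import random

# dp-means on 2-D points: like k-means, except the assignment step may
# create a new cluster when a point is farther than lam (in squared
# distance) from all current centroids.

def dp_means(points, lam, n_iter=20):
    centroids = [points[0]]
    assign = [0] * len(points)
    for _ in range(n_iter):
        for i, p in enumerate(points):
            d2 = [(p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centroids]
            j = d2.index(min(d2))
            if d2[j] > lam:                  # too far from everything: new cluster
                centroids.append(p)
                j = len(centroids) - 1
            assign[i] = j
        for j in range(len(centroids)):      # update step: cluster means
            members = [p for p, a in zip(points, assign) if a == j]
            if members:
                centroids[j] = (sum(m[0] for m in members) / len(members),
                                sum(m[1] for m in members) / len(members))
    return centroids, assign

rng = random.Random(0)
cloud1 = [(rng.gauss(0, 0.3), rng.gauss(0, 0.3)) for _ in range(100)]
cloud2 = [(rng.gauss(5, 0.3), rng.gauss(5, 0.3)) for _ in range(100)]
centroids, assign = dp_means(cloud1 + cloud2, lam=4.0)
```

On two well-separated clouds with λ between the within-cloud and between-cloud squared distances, the algorithm discovers exactly two clusters without being told k.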

## Blind Bayesian recovery of components of residential solid waste tonnage from totals data

This is a sketch of how maths and statistics can do something called blind source separation, meaning to estimate the components of data given only their totals. Here, I use Bayesian techniques for the purpose, sometimes called Bayesian inversion, using … Continue reading
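
The smallest version of the totals-to-components idea, under assumptions of my own (two components per total, independent normal priors, the total observed exactly; this is a toy, not the post's model): conditioning on the total splits the surprise in it between the components in proportion to their prior variances.

```python
# Blind source separation in miniature: a + b = y observed, with
# independent priors a ~ N(mu_a, var_a), b ~ N(mu_b, var_b).
# Then a | a + b = y is normal, with the prior mean shifted by the
# share var_a / (var_a + var_b) of the surprise (y - mu_a - mu_b).

def split_total(y, mu_a, var_a, mu_b, var_b):
    w = var_a / (var_a + var_b)
    mean_a = mu_a + w * (y - mu_a - mu_b)
    var_post = var_a * var_b / (var_a + var_b)
    return mean_a, var_post

# e.g. a 12-tonne total with prior components N(4, 1) and N(6, 4):
mean_a, var_post = split_total(12.0, 4.0, 1.0, 6.0, 4.0)
```

The vaguer component absorbs more of the discrepancy; with many totals and more components this conditional becomes one step of a Gibbs sampler rather than a closed form.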

## “The joy and martyrdom of trying to be a Bayesian”

Bayesians have all been there. Some of us don’t depend upon producing publications to assure our pay, so we have less pressure to please peer reviewers. Nonetheless, it’s all reacting to “What the hell are you doing? I don’t … Continue reading

## How fast is JAGS?

How fast is JAGS?

## The zero-crossings trick for JAGS: Finding roots stochastically

BUGS has a “zeros trick” (Lunn, Jackson, Best, Thomas, Spiegelhalter, 2013, pages 204–206; see also an online illustration) for specifying a new distribution which is not in the standard set. The idea is to couple an invented-for-the-moment Poisson density to … Continue reading
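
The arithmetic behind the trick is worth checking numerically (my illustration, not BUGS code): a Poisson(φ) observation equal to zero contributes likelihood exp(−φ), so setting φ = −log L + C, with C a constant keeping φ positive, injects an arbitrary likelihood term L up to that constant.

```python
import math

# The zeros trick: P(0 | Poisson(phi)) = exp(-phi), so a fake zero
# observation with rate phi = -log(L) + C contributes log-likelihood
# -phi = log(L) - C, i.e. the desired term up to an additive constant.

def poisson_pmf(k, rate):
    return rate ** k * math.exp(-rate) / math.factorial(k)

target_log_lik = -3.7      # an arbitrary log-likelihood term to inject
c = 10.0                   # constant keeping the Poisson rate positive
phi = -target_log_lik + c
contribution = poisson_pmf(0, phi)       # equals exp(-phi)
recovered = math.log(contribution) + c   # equals target_log_lik
```

Since MCMC only ever uses likelihood ratios, the constant C cancels, which is why the trick leaves the posterior untouched.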