# Category Archives: Bayes

## David Spiegelhalter on “how to spot a dodgy statistic”

In this political season, it’s useful to brush up on rhetorical skills, particularly ones involving numbers and statistics, or what John Allen Paulos called numeracy. Professor David Spiegelhalter has written a guide to some of these tricks. Read the whole … Continue reading

## Newt Gingrich and Van Jones. Right on.

It’s the thing. And it addresses how media and people forget about the actual statistics, and focus on the White Hot Bright Light: a study by Gelman, Fagan, and Kiss; a study by Fryer; a counterpoint to the Fryer study … Continue reading

## On Smart Data

One of the things I find surprising, if not astonishing, is that in the rush to embrace Big Data, a lot of learning and statistical technique has been left apparently discarded along the way. I’m hardly the first to point … Continue reading

## Cory Lesmeister’s treatment of Simpson’s Paradox (at “Fear and Loathing in Data Science”)

(Updated 2016-05-08 to provide a reference for plateaus of likelihood functions in the vicinity of the MLE.) Simpson’s Paradox is one of those phenomena of data which really give Statistics substance and a role, beyond the roles it inherits from, say, theoretical … Continue reading

## “Lucky d20” (by Tamino, with my reblogging comments)

Originally posted on Open Mind:

What with talk of killer heat waves, droughts, floods, etc. etc., this blog tends to get pretty serious. When it does, we don’t deal with happy prospects, but with the danger of worldwide catastrophe. But…

## p-values and hypothesis tests: the Bayesian(s) rule

The American Statistical Association of which I am a longtime member issued an important statement today which will hopefully move statistical practice in engineering and especially in the sciences away from the misleading practice of using p-values and hypothesis tests. … Continue reading

## high dimension Metropolis-Hastings algorithms

If attempting to simulate from a multivariate standard normal distribution in a large dimension, starting from the mode of the target, i.e., its mean γ, leaving the mode γ is extremely unlikely, given the huge drop between the value of the density at the mode γ and at likely realisations. Continue reading
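To make the size of that drop concrete, here is a small numerical sketch (mine, not from the linked post): for a standard normal in dimension d, a typical draw lies near radius √d, where the density is lower than at the mode by a factor of roughly exp(−d/2), which is also the Metropolis acceptance probability for a proposal that jumps from the mode straight to such a point.

```python
import numpy as np

rng = np.random.default_rng(42)

def log_density(x):
    # log of the (unnormalised) standard multivariate normal density
    return -0.5 * np.dot(x, x)

for d in (2, 10, 100):
    mode = np.zeros(d)
    typical = rng.standard_normal(d)  # a "likely realisation" of the target
    # Metropolis acceptance probability for a move mode -> typical point
    log_ratio = log_density(typical) - log_density(mode)
    accept = np.exp(min(0.0, log_ratio))
    print(f"d={d:4d}  radius={np.linalg.norm(typical):7.2f}  accept={accept:.3e}")
```

For d = 100 the acceptance probability is on the order of e⁻⁵⁰, which is why a chain started at the mode can look stuck for an astronomically long time.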

## Generating supports for classification rules in black box regression models

Inspired by the extensive and excellent work in approximate Bayesian computation (see also), especially that done by Professors Christian Robert and colleagues (see also), and Professor Simon Wood (see also), it occurred to me that the complaints regarding lack of … Continue reading

## R and “big data”

On 2nd November 2015, Wes McKinney, the developer of the highly useful Python pandas module (and other things, including books), wrote an amusing blog post, “The problem with the data science language wars”. I by no means disagree with him. … Continue reading

## On differential localization of tumors using relative concentrations of ctDNA. Part 2.

Part 1 of this series introduced the idea of ctDNA and its use for detecting cancers or their resurgence, and proposed a scheme whereby relative concentrations of ctDNA at two or more sites after controlled disturbance might be used to … Continue reading

## Deep Recurrent Learning Networks

(Also known to statisticians as deep exponential families.)

- Large scale deep learning
- Four easy lessons on Deep Learning from Google

## Comprehensive and compact tutorial on Petris’ DLM package in R; with an update about Helske’s KFAS

A blogger named Lalas produced on Quantitative Thoughts a very comprehensive and compact tutorial on the R package dlm by Petris. I use dlm a lot. Unfortunately, Lalas does not give details on how the SVD is used. They do … Continue reading

## “Cauchy Distribution: Evil or Angel?” (from Xian)

“Cauchy Distribution: Evil or Angel?” From Professor Christian Robert.

## “… the most patronizing start to an answer I have ever received …”

Professor Christian Robert tries to help out a student of MCMC on Cross Validated and earns the comment that his help had “the most patronizing start to an answer I have ever received”. I learned a new term: primitivus petitor.

## “A vignette on Metropolis” (Christian Robert)

Originally posted on Xi'an's Og:

Over the past week, I wrote a short introduction to the Metropolis-Hastings algorithm, mostly in the style of our Introduction to Monte Carlo with R book, that is, with very little theory and…

## “Unbiased Bayes for Big Data: Path of partial posteriors” (Christian Robert)

Unbiased Bayes for Big Data: Path of partial posteriors.

## Markov Chain Monte Carlo methods and logistic regression

This post could also be subtitled “Residual deviance isn’t the whole story.” My favorite book on logistic regression is Dr Joseph Hilbe’s Logistic Regression Models, Chapman & Hall/CRC, 2009. It is a solidly frequentist text, but its … Continue reading

## Bayesian change-point analysis for global temperatures, 1850-2010

Professor Peter Congdon reports on two Bayesian models for global temperature shifts in his textbook, Applied Bayesian Modelling, as “Example 6.12: Global temperatures, 1850-2010”, on pages 252-253. A direct link is available online. The first is apparently original with Congdon, … Continue reading
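As a toy illustration of the change-point idea (my own sketch, not Congdon’s model): simulate a series with a single mean shift, put a uniform prior on the change point τ and conjugate normal priors on the two segment means (known unit noise variance), and obtain the posterior over τ by enumerating the product of the two segments’ marginal likelihoods.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic series: mean 0 before the change point, mean 2 after, unit noise
n, true_tau, shift = 100, 60, 2.0
y = rng.standard_normal(n)
y[true_tau:] += shift

v = 100.0  # variance of the diffuse N(0, v) prior on each segment mean

def log_marginal(seg):
    # log of integral over mu of prod_i N(y_i | mu, 1) * N(mu | 0, v),
    # i.e. the segment's marginal likelihood under the conjugate prior
    m = len(seg)
    s = seg.sum()
    return (-0.5 * m * np.log(2 * np.pi)
            - 0.5 * np.log(1 + m * v)
            - 0.5 * (np.dot(seg, seg) - v * s * s / (1 + m * v)))

# Uniform prior over change points tau = 1, ..., n-1
log_post = np.array([log_marginal(y[:t]) + log_marginal(y[t:])
                     for t in range(1, n)])
log_post -= log_post.max()          # stabilise before exponentiating
post = np.exp(log_post)
post /= post.sum()

tau_hat = 1 + int(np.argmax(post))  # posterior mode for the change point
print(tau_hat)
```

The posterior mode lands at or very near the true change point, and the full vector `post` quantifies the uncertainty around it, which is the real payoff over a point estimate.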

## “Big Data is the new Phrenology”

From mathbabe: Big Data is the new phrenology. Excerpt: Here’s the thing. What we’ve got is a new kind of awful pseudo-science, which replaces measurements of skulls with big data. There’s no reason to think this stuff is any less … Continue reading

## R vs Python: Practical Data Analysis

R vs Python: Practical Data Analysis (Nonlinear Regression).

## Christian Robert on the amazing Gibbs sampler

Professor Christian Robert remarks on the amazing Gibbs sampler. Implicitly he’s also underscoring the power of properly done Bayesian computational analysis. For here we have a problem with a posterior distribution having two strong modes, so a point estimate, like … Continue reading
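A minimal illustration of why a point estimate misleads here (my example, not Robert’s): for an equal-weight mixture of N(−3, 1) and N(3, 1), standing in for a strongly bimodal posterior, the posterior mean lands near 0, a point of low posterior density between the two modes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Draws from an equal-weight mixture of N(-3, 1) and N(+3, 1)
modes = rng.choice([-3.0, 3.0], size=100_000)
samples = modes + rng.standard_normal(100_000)

post_mean = samples.mean()  # lands near 0, between the modes

def mixture_density(x):
    # density of the two-component normal mixture
    return 0.5 * (np.exp(-0.5 * (x + 3) ** 2)
                  + np.exp(-0.5 * (x - 3) ** 2)) / np.sqrt(2 * np.pi)

print(post_mean, mixture_density(post_mean), mixture_density(3.0))
```

The density at the posterior mean is a small fraction of the density at either mode, so the “best single guess” sits in a region the posterior itself says is implausible.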

## Christian Robert on Alan Turing

Alan Turing Institute. See Professor Robert’s earlier post on Turing, too.

## Richard Muller: “I Was Wrong On Global Warming, But It Didn’t Convince The ‘Sceptics'”

Update, 26th February 2015: This is not directly related to the BEST project described in the YouTube video above, but the Lawrence Berkeley National Laboratory has experimentally linked increases in radiative forcing with increases in atmospheric concentrations of CO2 due to … Continue reading

## Naomi Oreskes and significance testing

Naomi Oreskes has an op-ed in The New York Times today, which intends to defend the severe standards of evidence scientists employ, with special applicability to climate science and its explanations of causation (greenhouse gases produce radiative forcing), attribution (most … Continue reading

## On nested equivalence classes of climate models, ordered by computational complexity

I’m digging into the internals of ABC, for professional and scientific reasons. I’ve linked a great tutorial elsewhere, and argued that this framework, advanced by Wood, and Wilkinson (Robert), and Wilkinson (Darren), and Hartig and colleagues, and Robert and colleagues, … Continue reading

## “[W]e want to model the process as we would simulate it.”

Professor Darren Wilkinson offers a pithy insight on how to go about constructing statistical models, notably hierarchical ones: “… we want to model the process as we would simulate it ….” This appears in his blog post One-way ANOVA with … Continue reading

## struggling with problems already partly solved by others

Climate modelers and their models see as their frontier the problem of dealing with spontaneous dynamics in systems such as the atmosphere or ocean, which are not directly forced by boundary conditions such as radiative forcing due to increased greenhouse gas (“GHG”) … Continue reading

## illustrating particle filters and Bayesian fusion using successive location estimates on the unit circle

Modern treatments of Bayesian integration to obtain posterior densities often use some form of Markov Chain Monte Carlo (“MCMC”), typically Gibbs sampling. Gibbs works well with many Bayesian hierarchical models. The standard problem-solving situation with these is that a … Continue reading
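As a minimal sketch of Gibbs sampling (an illustrative example of mine, not the post’s location-on-the-circle problem): for a standard bivariate normal with correlation ρ, the full conditionals are x | y ~ N(ρy, 1 − ρ²) and y | x ~ N(ρx, 1 − ρ²), and simply alternating draws from them recovers the joint distribution.

```python
import numpy as np

rng = np.random.default_rng(1)
rho = 0.8
n_iter = 20_000

x, y = 0.0, 0.0
draws = np.empty((n_iter, 2))
for t in range(n_iter):
    # Full conditionals of a standard bivariate normal with correlation rho
    x = rng.normal(rho * y, np.sqrt(1 - rho ** 2))
    y = rng.normal(rho * x, np.sqrt(1 - rho ** 2))
    draws[t] = x, y

print(draws.mean(axis=0), np.corrcoef(draws.T)[0, 1])
```

The sample means come out near (0, 0) and the sample correlation near ρ, despite each update touching only one coordinate at a time; that one-coordinate-at-a-time structure is exactly why Gibbs meshes so naturally with hierarchical models.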