“Lucky d20” (by Tamino, with my reblogging comments)

Careful consideration of really basic things like this is, for me, incredibly refreshing, and it builds the self-discipline needed for real-world problems, which are often messy and full of distracting entanglements.

A couple of thoughts:

  • I think the mechanism for automatically rolling and recording the results of rolls is pretty slick. Of course, the perennial doubter in me wonders whether, for any given rolling hardware, a bias might be introduced by the hardware rather than the dice. This could be checked in a couple of ways. One, design and build a completely different system for rolling and checking dice, repeat the experiment, and compare results. Two, roll sets of dice and see whether the sequences of rolls show any long-term, albeit weak, temporal dependencies, first for a single die and then across dice.
  • To what degree does a machine implementation of rolling dice mimic what players do when rolling for D&D? People tend to be bad generators of randomness, and I’ve sometimes wondered whether rolling by hand randomizes ordinary dice or a d20 enough. Casinos tend to use machines to randomize, even when rolling dice. This matters because results like those in the article may not apply well to a casual D&D game unless a mechanical roller is used. Does anyone know whether high-stakes D&D games use mechanical rollers?
  • I wonder if there may not be more efficient ways of detecting discrepancies between a die and uniformity, or between two dice, than rolling 8300 times. In particular, I wonder whether a sequential updating scheme using a Dirichlet-multinomial model might help here and reach significance in fewer than 8300 rolls, while still modeling the relative-frequency counting ideal.
  • There are ways this problem could be modified to make it a toy world for training people in data science. For example, suppose there were a million rolls, but some of the time the value produced on a roll was not available? Or suppose the recorded value was constrained to a small proper subset of the 20 sides? Or suppose there were a million rolls of a thousand dice? Or suppose the objective was to simulate a million rolls of a thousand dice? Like Karl Broman’s socks, this could be the basis for a neat teaching case.
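The temporal-dependence check in the first bullet could be sketched as follows. This is only an illustration, not anything from the article: the rolls here are simulated from a fair d20, and in a real analysis the recorded hardware sequence would be substituted.

```python
import random
from math import sqrt

rng = random.Random(42)
rolls = [rng.randint(1, 20) for _ in range(8300)]  # simulated fair-d20 sequence

def lagged_autocorr(x, max_lag=10):
    """Sample autocorrelation of the sequence at lags 1..max_lag."""
    n = len(x)
    mean = sum(x) / n
    centered = [v - mean for v in x]
    denom = sum(c * c for c in centered)
    return [
        sum(centered[i] * centered[i + k] for i in range(n - k)) / denom
        for k in range(1, max_lag + 1)
    ]

acf = lagged_autocorr(rolls)
# Under independence each lag coefficient is approximately N(0, 1/n),
# so anything outside roughly +/- 2/sqrt(n) deserves a closer look.
band = 2.0 / sqrt(len(rolls))
flagged = [k + 1 for k, r in enumerate(acf) if abs(r) > band]
print("lags outside the rough 2-sigma band:", flagged)
```

For a hardware effect one would run the same check on each die's sequence separately, and then on the interleaved sequence across dice, as suggested above.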
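The sequential Dirichlet-multinomial idea in the third bullet might look something like this minimal sketch. Everything here is my own choice, not the article's method: the die is simulated with a deliberate load on the 20 face, the prior is a symmetric Dirichlet(1), and the stopping rule (a Bayes factor of 100 against exact fairness) is arbitrary.

```python
import random
from math import lgamma, log

SIDES = 20
rng = random.Random(1)

def log_marginal(counts, alpha=1.0):
    """Log marginal likelihood of the counts under a symmetric Dirichlet(alpha) prior."""
    n = sum(counts)
    out = lgamma(SIDES * alpha) - lgamma(SIDES * alpha + n)
    for c in counts:
        out += lgamma(alpha + c) - lgamma(alpha)
    return out

def log_fair(counts):
    """Log likelihood of the counts under an exactly fair die."""
    return sum(counts) * log(1.0 / SIDES)

# Simulated loaded die: P(roll 20) = 0.15, the other faces equal.
weights = [(1 - 0.15) / (SIDES - 1)] * (SIDES - 1) + [0.15]

counts = [0] * SIDES
log_bf = 0.0
for n in range(1, 8301):
    face = rng.choices(range(SIDES), weights=weights)[0]
    counts[face] += 1
    # Bayes factor of "some Dirichlet-distributed bias" vs. "exactly fair".
    log_bf = log_marginal(counts) - log_fair(counts)
    if log_bf > log(100.0):  # decisive evidence against fairness; stop early
        break
print(f"stopped after {n} rolls; log Bayes factor vs. fair = {log_bf:.2f}")
```

The point of the sketch is the stopping rule: a strongly loaded die can trip the threshold well before a fixed 8300-roll budget, while a mild load may still need the full run because the Dirichlet marginal pays an Occam penalty that grows with the number of faces.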

Open Mind

What with talk of killer heat waves, droughts, floods, etc. etc., this blog tends to get pretty serious. When it does, we don’t deal with happy prospects, but with the danger of worldwide catastrophe. But every now and then we need to “lighten up,” so let’s have a little fun.

Recently a reader comment pointed to a website reporting the results of testing dice for fairness. Specifically, it tested the “d20” or 20-sided die. It’s a die often used in tabletop games, especially D&D (Dungeons & Dragons). That site links to yet another site which tests dice (specifically, the d20). They make enough of their data available for us to take a close look.
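For readers who want to try this on the published data, the basic fairness check is a chi-square goodness-of-fit test of the observed face counts against the uniform expectation. The counts below are simulated stand-ins, and the test-against-critical-value form is my simplification; the linked sites may analyze their data differently.

```python
import random

SIDES = 20
N_ROLLS = 8300
rng = random.Random(0)

# Simulated stand-in for the published face counts of one die.
counts = [0] * SIDES
for _ in range(N_ROLLS):
    counts[rng.randrange(SIDES)] += 1

expected = N_ROLLS / SIDES
chi2 = sum((c - expected) ** 2 / expected for c in counts)

# Chi-square critical value for 19 degrees of freedom at the 5% level.
CRITICAL_05_DF19 = 30.144
verdict = "reject uniformity" if chi2 > CRITICAL_05_DF19 else "consistent with a fair die"
print(f"chi-square = {chi2:.2f} on {SIDES - 1} df -> {verdict}")
```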


About hypergeometric

See http://www.linkedin.com/in/deepdevelopment/ and http://667-per-cm.net
This entry was posted in Bayes, Bayesian, card decks, card draws, card games, chance, D&D, Dungeons and Dragons, games of chance, mathematics, maths, Monte Carlo Statistical Methods, probability, statistical dependence, statistics, stochastic algorithms, stochastics, Wizards of the Coast.
