Editor’s Letter – Vol. 25, No. 1

Dear Readers,

Welcome to the inaugural edition of CHANCE in color.

I can think of no more than a handful of books that deal in any depth with the history of Bayesian statistics. In a wider context, there are the two masterpieces by Stephen Stigler—The History of Statistics: The Measurement of Uncertainty Before 1900 and Statistics on the Table: The History of Statistical Concepts and Methods—which set the standard for any attempt at statistical historiography. Stigler’s illuminating chapter on Bayes’ essay in the former is a must-read for anyone with a genuine curiosity about the early foundations of what is now labeled Bayesian statistics. Then there is Stephen Fienberg’s remarkable monograph—When Did Bayesian Inference Become ‘Bayesian?’—and Andrew Dale’s two grandiose undertakings: Most Honourable Remembrance: The Life and Work of Thomas Bayes and A History of Inverse Probability: From Thomas Bayes to Karl Pearson, among others.

What gives Sharon McGrayne’s recent book, The Theory That Would Not Die, its unique flavor? In my view, it is her thorough research of the subject matter, coupled with flowing prose, an impressive set of interviews with Bayesian statisticians, and an extremely engaging style in telling the personal stories of the few nonconformist heroes of the Bayesian school. Perhaps more than others, the editors of CHANCE appreciate the unforeseen challenges of conveying statistical concepts through nontechnical narratives. It is in this context that most of us are highly impressed with McGrayne’s effort.

Understandably, the first edition of the book comes with a few rough edges. It opens with a crescendo by—deservedly—shifting the focus from Bayes to Laplace; one can see the contrarian in Laplace paving the way for establishing inverse probability within the mathematical apparatus of the day. It is in keeping up with the fast tempo of recent developments in the Bayesian sphere, however, that McGrayne’s book finally runs out of steam. And who could blame her? Try explaining the machinery behind reversible jump MCMC to a lay audience.

In recognition of McGrayne’s endeavor, we devote two pieces in this edition of CHANCE to The Theory That Would Not Die: an extensive interview with the author, spearheaded by Michelle Dunn and accompanied by a team of our editors, and a book review by Christian Robert.

This issue of CHANCE is amplified with the first installments of two new columns. In The Big Picture, Nicole Lazar discusses the many facets of handling, analyzing, and making sense of complex and large data sets. As Lazar suggests, relevant topics abound and are not limited to the nontrivial challenges of conducting a huge number of tests, visualizing large data, the familiar large p, small n problem, and the computational demands of dealing with huge data sets. In Taking a Chance in the Classroom, column editors Dalene Stangl, Mine Cetinkaya-Rundel, and Kari Lock Morgan will focus on pedagogical approaches to communicating the fundamental ideas of statistical thinking in the classroom, using data sets from CHANCE and elsewhere.

Also in this issue, Erik Heiny and Robert Heiny examine the effects of driving distance and driving accuracy in predicting scoring average in golf, using a data set collected from the 2000–2010 PGA Tour seasons. In a one-of-a-kind analysis, the authors not only employ a linear mixed effects modeling approach to capture the longitudinal nature of the repeated measures, but also develop a metric for comparing the driving ability of players at the individual level.

Tatsuki Koyama gives us an impressive graphical presentation of the lengths of The Beatles’ entire repertoire of studio recordings, from their first single, Love Me Do backed with P.S. I Love You, to the songs included in their final album, Let It Be. More than anything else, this is yet another reflection of the boundless possibilities in visualizing data.

In O Privacy, Where Art Thou?, column editor Alexandra Slavkovic treats her readers to an article by Jerome Reiter, past chair of the ASA’s Committee on Privacy and Confidentiality. The theme of Reiter’s article is the issues that can easily arise when manipulating data to avoid breaches of confidentiality. Reiter’s insight on matters related to statistical disclosure limitation is indispensable.

In a highly engaging A Statistician Reads the Sports Pages article, Andrew Thomas proposes a “fixed latent tile order” mechanism for eliminating the luck associated with the random draw of tiles in the game of Scrabble. The aim of Thomas’ same-tile-order rule is to confine the game’s uncertainty largely to its players’ skills.

In Visual Revelations, Howard Wainer introduces us to a set of remarkable maps created by Joseph Fletcher, charting sometimes vaguely defined variables, such as ignorance and crime, in mid-19th-century England. As Wainer observes, Fletcher had an inferential goal in mind: Crime is proportional to ignorance.

Andrew Gelman and Eric Loken look into a rather sensitive angle on the ethics of teaching statistics. The authors put forward a coherent argument as to why many statistics instructors end up ignoring such foundational ideas as randomization and controlled experimentation when assessing students’ progress.

Christian Robert reviews four books: The Theory That Would Not Die, The Cult of Statistical Significance, Handbook of Markov Chain Monte Carlo, and Handbook of Fitting Statistical Distributions with R.

Finally, Jonathan Berkowitz greets his readers with an extra dose of his “mosaicquote” puzzles.

Sam Behseta
