Discussing Principles of Uncertainty with Jay Kadane

CHANCE editors Sam Behseta and Michelle Dunn, both graduates of Carnegie Mellon University, talked with CMU professor emeritus Jay Kadane about his new book.

Jay, please tell the readers of CHANCE about your new book.

My new book is called Principles of Uncertainty. It starts with addressing the question, “What do we mean by probability?” That is, if we say that the probability of rain in Pittsburgh tomorrow is 30%, what have we said? How do we understand that sentence? How do we use it? What are its properties? How do we draw inferences? The book continues with chapters on Monte Carlo methods, hierarchical models, and problems involving multiple decision-makers. It also has a chapter that says some skeptical things about classical statistics.

We understand that the availability of the book is a bit unorthodox—it’s free online in addition to being available for purchase in print. Please tell us what inspired you to do this, and how you were able to convince the publisher to make it free.

The book is published by Chapman and Hall and is free on my website. I wanted to make the book free because if a poor student somewhere wants to know what’s in the book, I don’t want his money—I want his mind. Rather than sign a contract before I wrote the book, first I sat down to write it, and when I finished, I put it on the web. Then I wrote to five publishers saying, “I’ve written this book. You can take a look at it on the web and see if you’d like to make an offer to publish it. And by the way, if I have to choose between it being available in print and being free on the web, I would choose to have it be free on the web.” All five of the publishers that I wrote to made offers. Some proposals were to put some of the chapters on the web or to make it free after an embargo period. There were various ideas about how to handle this, but Chapman and Hall gave me the offer I wanted—namely, that the book could remain free on the web in its entirety. And so I went with them. I had a dean once who said that an academic is someone who can’t take yes for an answer; in this case, I took yes for an answer.

Who is the target audience for the book?

The way I wrote it, I would recommend a semester or two of calculus. But other than that, only an open mind is needed. The book could be read at many levels—it uses what I call just-in-time mathematics, which means I explain the math just before I use it. So, if the math is familiar to people, they can just skip that part. If it’s useful to them to read it, it’s there for them. I tried very hard to avoid the phrase “it can be shown that” because why not just show it?

It’s not intended to be in any of the usual categories of books, but rather, it’s intended to be my book that tells a story the way I think it needed to be told. So one difference I would say between this and a probability book is that a typical probability book will start with the axioms of probability. I start by asking, “Why are those the axioms of probability?” What does that have to do with anything in the real world? Why do we model uncertainty with probability? There are reasons, and I try to give reasons and discuss the alternatives at the same time.

I was always the guy in the back of the room wondering why. And this book is my answer as to why. In the course of answering why, I had to come to grips with the fact that many of the things that I had been taught didn’t make any sense from a why point of view. So I had to change a lot of my thinking.

Could you give an example of some things that you were taught that no longer make sense?

My PhD is from Stanford—I came out of graduate school with a very classical dissertation, and my thought at the time about where to begin with statistical theory was Lehmann’s Testing Statistical Hypotheses book. Now, I can’t make sense of that as a way of thinking about statistics. The reasons I came to that conclusion were twofold.

First, at Yale when I was working with a sociologist and a statistics graduate student, we looked at data on participation in small groups. There was a theory of how the number of utterances of the person who said the most should be related to the person who said the second most and third most and so on. It was a reasonable theory, so we tested it. And we rejected it at the 0.05 level and the 0.01 level and the 10⁻⁶ level. And then I had to ask myself if I would be more impressed if it were 10⁻¹³, and the answer was no. So then I had to think about why not, and what’s going on here? Finally, I figured out a way of plotting the data, and in fact, the data weren’t very far from the hypothesis. But we had 10,000 observations. And so we were essentially holding up a magnifying glass—a square root of n magnifying glass—and from that point of view, the data were quite far from the null hypothesis. But in the domain of theories of participation rates in small groups, the hypothesis was not at all bad—it was a perfectly reasonable thing to think. So, what I found was that the method was not working well for me there. It was not helping me—it was essentially misleading me because it was saying that this null hypothesis is really terrible, when it wasn’t.
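
To see the square root of n magnifying glass concretely, here is a minimal sketch in Python; the fair-coin setup and every number in it are illustrative assumptions, not the small-groups data.

```python
# Sketch: with n = 10,000, a substantively negligible deviation from the
# null hypothesis becomes "highly significant". Illustrative numbers only.
import numpy as np
from scipy import stats

n = 10_000          # large sample, as in the small-groups data
p_null = 0.50       # null hypothesis: a fair coin
p_true = 0.52       # truth: off by only two percentage points

successes = np.random.default_rng(0).binomial(n, p_true)
p_hat = successes / n

# Normal-approximation z-test. The z statistic grows like sqrt(n):
# the "square root of n magnifying glass".
z = (p_hat - p_null) / np.sqrt(p_null * (1 - p_null) / n)
p_value = 2 * stats.norm.sf(abs(z))
print(f"p_hat = {p_hat:.3f}, z = {z:.2f}, p-value = {p_value:.1e}")
# The p-value is tiny, yet p_hat is within 0.02 of the null --
# as a theory of the data, the null is not at all bad.
```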

A few years later, I was working for the navy, and we had a machine that had been tested very carefully in the laboratory. Then it was taken to the field. Tests were run, and the results were written up by an analyst. The analyst’s conclusion was that it was not running significantly differently in the field than it had in the laboratory. Just before the report left our organization, one of the senior scientists looked at it and said, “I have a funny feeling about this. I want a statistician to look at it.” When we looked at it, what we found was that the hypothesis test was correctly done, but there were only five observations. They cost a million dollars apiece to collect, so that’s why there were only five. In fact, the machine was working only 75% as well in the field as it was in the laboratory, but because there were only five observations, it was not significantly different. Surely what the admiral wants to know is 75% and not “it’s not significantly different.” So, what I found was that testing hypotheses in the usual way was really telling me about sample size. There are better ways of determining sample size.
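
A hypothetical reconstruction of the situation (the navy data are not public, so these five values are invented to average 75) shows how the test answers the wrong question:

```python
# Five invented field observations averaging 75% of a laboratory benchmark
# of 100. The one-sample t-test calls this "not significant"; the admiral's
# question is answered by the estimate and its interval, not by the test.
import numpy as np
from scipy import stats

lab_benchmark = 100.0
field = np.array([45.0, 100.0, 55.0, 105.0, 70.0])  # mean 75, n = 5

t_stat, p_value = stats.ttest_1samp(field, popmean=lab_benchmark)
ci = stats.t.interval(0.95, df=len(field) - 1,
                      loc=field.mean(), scale=stats.sem(field))
print(f"mean = {field.mean():.0f}, p-value = {p_value:.2f}")   # p ~ 0.10
print(f"95% interval: ({ci[0]:.0f}, {ci[1]:.0f})")
# "Not significantly different" here mostly reflects n = 5, not the machine.
```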

So, I had to rethink—what is useful, and how do I do business based on what is useful, rather than just continue with what I’ve been taught?

How do the different flavors of Bayesian thinking address what you just mentioned as a problem with frequentist thinking?

The frequentist point of view is, “Do I reject or not reject this null hypothesis?” Bayesian analysis instead asks, “What is it reasonable to believe, now that we’ve seen the data?” It’s aimed at what’s true rather than what’s not true. Both use likelihoods to draw inferences; Bayesians also incorporate existing knowledge into the prior distributions for the parameters of the likelihood.
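
That recipe is compact enough to show in a few lines. A minimal conjugate sketch, with illustrative numbers of my choosing rather than anything from the book: the posterior is proportional to the prior times the likelihood.

```python
# Beta-binomial conjugate updating: a Beta(a, b) prior plus s successes and
# f failures gives a Beta(a + s, b + f) posterior. Numbers are illustrative.
from scipy import stats

a, b = 2.0, 2.0              # prior: mild belief that theta is near 0.5
successes, failures = 7, 3   # the observed data

posterior = stats.beta(a + successes, b + failures)
print(f"posterior mean = {posterior.mean():.3f}")          # 0.643
lo, hi = posterior.interval(0.95)
print(f"95% credible interval = ({lo:.3f}, {hi:.3f})")
# The answer is a distribution over theta -- "what is it reasonable to
# believe, now that we've seen the data?" -- not an accept/reject verdict.
```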

I am a subjective Bayesian, which means that I believe it’s okay for prior distributions to be formed purely by reflecting informed opinion, and it’s part of my responsibility to explain the reasons why I made the choices that I did. I’m not pretending that the prior is objective, but instead I have to take on the burden of persuading my reader that the paper is worth reading.

Empirical Bayes is an approach to hierarchical models in which the parameters at the highest (or lowest, depending on which way you orient it) level are estimated, often by maximum likelihood or another classical method. Those values are then conditioned on to understand the parameters in the rest of the hierarchy. The problem with empirical Bayes is that it does not take into account the uncertainty in the parameters that are estimated with a classical method. Consequently, the resulting inferences don’t reflect proper variances—they’re too certain about their estimates. I think that empirical Bayes lacks the force that the full Bayesian treatment would have.
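
A numerical sketch of that point, under assumptions of my own (a simple normal-normal hierarchy with one observation per group and a flat grid prior on tau, none of which come from the interview): the plug-in posterior variance for a group mean ignores the spread that comes from not knowing tau.

```python
# Compare the empirical Bayes plug-in posterior for one group mean with a
# posterior that averages over the uncertainty in tau. Assumed model:
# y_i | theta_i ~ N(theta_i, 1), theta_i ~ N(0, tau^2), few groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sigma, tau_true, k = 1.0, 1.0, 8
theta = rng.normal(0.0, tau_true, size=k)
y = rng.normal(theta, sigma)

# Empirical Bayes: method-of-moments estimate of tau^2, then plug it in.
tau2_hat = max(np.mean(y**2) - sigma**2, 1e-6)
eb_var = tau2_hat / (tau2_hat + sigma**2) * sigma**2  # Var[theta_1 | y, tau_hat]

# Average over tau instead (flat prior on a grid).
tau_grid = np.linspace(0.01, 5.0, 500)
logm = np.array([stats.norm.logpdf(y, 0.0, np.hypot(sigma, t)).sum()
                 for t in tau_grid])
w = np.exp(logm - logm.max())
w /= w.sum()                                   # posterior weights for tau

b = tau_grid**2 / (tau_grid**2 + sigma**2)     # shrinkage factor at each tau
m, v = b * y[0], b * sigma**2                  # E and Var of theta_1 given tau
# Law of total variance: extra spread comes from not knowing tau.
fb_var = np.sum(w * v) + np.sum(w * m**2) - np.sum(w * m)**2

print(f"plug-in (empirical Bayes) variance: {eb_var:.3f}")
print(f"tau-averaged posterior variance:    {fb_var:.3f}")  # typically larger
```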

The other major view is called objective Bayes, which claims that the priors have some objective status—I don’t think they do. One way of seeing that quickly is this example: if θ has a uniform distribution, you might claim that that doesn’t say very much about θ. However, you have just said a whole lot about 1/θ—you have said that you are almost sure that 1/θ is close to zero. So yes, there is a re-parameterization such that a uniform prior with respect to that re-parameterization will do. But then everything relies upon which re-parameterization you use—and that’s equivalent to having an informative prior. There’s nothing wrong with having a uniform prior on a bounded space if that really represents your opinion. But you have to justify why that’s your opinion, and also ask: if your readers don’t agree with you, how much deviation from uniform would still give roughly the same results? That’s useful to your readers as well, because they may have their own views of the matter you’re discussing. But I don’t think that uniform priors have any greater attraction than any other prior per se.
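
The reparameterization point takes a few lines to check by simulation; the Uniform(0, 1000) range below is my illustrative stand-in for a “flat, noninformative” prior.

```python
# A flat prior on theta is highly informative about 1/theta.
import numpy as np

theta = np.random.default_rng(0).uniform(0.0, 1000.0, size=100_000)
print(f"P(1/theta < 0.01) = {np.mean(1.0 / theta < 0.01):.2f}")  # about 0.90
```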

What is the future of Bayesian statistics? Do you see it becoming the dominant theme in statistical sciences?

Well, I hope so. In large respect, it’s up to those of us who think that Bayesian methods are good to solve problems that otherwise would be very difficult to solve. If we can deliver analyses that are persuasive and that tell people what they need to know, then the field will continue to thrive. When people start to read the fine print of what classical methods mean—for example, that a test of significance gives you the probability, were the null hypothesis true, of data as or more extreme than the data observed—they might question why they should care about that. So as people get interested in what a method means rather than just how to compute it, I think that they will come to prefer a more meaningful analysis, a Bayesian analysis.
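
That fine print can be read off directly by simulation. A small sketch with invented data: the p-value is the frequency, in hypothetical worlds where the null is true, of data as or more extreme than what was observed.

```python
# The p-value as the definition states it: simulate under the null and
# count data sets as or more extreme than the observed one. Invented data.
import numpy as np

n, observed_heads = 100, 61                    # illustrative observation
sims = np.random.default_rng(0).binomial(n, 0.5, size=200_000)  # null worlds
extreme = np.abs(sims - n / 2) >= abs(observed_heads - n / 2)
print(f"two-sided p-value = {np.mean(extreme):.3f}")   # about 0.035
# A statement about other possible data sets under the null -- not the
# probability that the null hypothesis itself is true.
```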

Much of your work has been application-driven. What problem is occupying your thoughts at the moment?

I’m very much involved, through the American Statistical Association’s Committee on Scientific Freedom and Human Rights, with current events in Argentina. In Argentina, statisticians have been fined and are being threatened with criminal sanctions for publishing inflation statistics that the government doesn’t like. There was recently a criminal trial in which two of the statisticians were found “not guilty.” But it’s my understanding that there are still criminal charges against several others that haven’t been heard yet. In addition, there are administrative fines of over $100,000 against some of them. It’s an open question whether the courts in Argentina are sufficiently independent of the government to throw these cases out.

No matter what the results of the trials, the arrests shut down the discussion of the extent to which there is inflation in Argentina. They shut it down in Argentina, that is—there is still a great deal of discussion of the question outside the country. From my point of view, and from the American Statistical Association’s, there should be freedom of speech and freedom of the press. People should be able to discuss these things. The committee is trying to see what we can do to persuade the Argentine government to drop the fines and charges against the statisticians. I think we have a responsibility to do what we can to shed light on these matters and to raise our voices when wrong things are occurring.

Do you have any final advice to the readers of CHANCE, particularly students or young professionals?

Keep an open mind because you may change your mind about what’s useful. Seek out mentors from whom you think you’ll learn the most. Find professors who will help you develop your own point of view about matters, who can lead you to think of things in a fresh way. Being a statistician is so very much fun because of all the different things that we can find out about and all the different puzzles that we’re confronted with.
