Book Reviews 26.3

This series contains three reviews of textbooks written by local PhD students (Pierre Jacob, Merlin Keller, and Jean-Bernard Salomond). The other two reviews, written by Marc Hoffmann, my colleague here at Paris-Dauphine, focus on stochastic processes books.

Probability and Statistics for Engineers and Scientists (9th Edition)

R.E. Walpole, R.H. Myers, S.L. Myers, and K.E. Ye

Probability & Statistics for Engineers and Scientists

Hardcover: 816 pages

Publisher: Pearson

Language: English

ISBN-13: 978-0321629111

From the ninth edition of a book, I was expecting not the most innovative approach to the field, but a perfected, polished document intended for a scientific audience.

Contrary to what the title suggests, the book is only about statistics, not about probability. Some notions of probability, those necessary for statistics, are introduced: random variables, expectations, and the compulsory list of parametric distributions. There is no mention of measure theory, and most of the explanations are given for real-valued, univariate, or bivariate random variables. No proofs are given, except for the simplest lemmas.

The choice of not relying on mathematics might succeed in making the material accessible to a general audience. However, the intended audience of scientists and engineers, who are supposed to be familiar with basic mathematics, might regret the lack of formal statements, which makes some parts of the book sloppy. For instance, when the sample variance is introduced (Chapter 1, Page 15), the notion of expectation is not yet mentioned. Hence, the authors cannot justify the 1/(n − 1) normalization factor by the unbiasedness property. Instead, they provide an evasive explanation involving degrees of freedom.

Later (Chapter 9), the explanation changes and the 1/(n − 1) factor in the sample variance formula is justified by the unbiasedness property, introduced just before. These contradictory explanations seem inappropriate, especially for the ninth edition of a book!
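For the record, once expectations are available the justification is a short computation; in standard notation (not necessarily the book's), for i.i.d. observations $X_1, \dots, X_n$ with mean $\mu$ and variance $\sigma^2$,

\[
\mathbb{E}\left[\sum_{i=1}^n (X_i - \bar{X})^2\right]
= \mathbb{E}\left[\sum_{i=1}^n (X_i - \mu)^2\right] - n\,\mathbb{E}\big[(\bar{X} - \mu)^2\big]
= n\sigma^2 - n\cdot\frac{\sigma^2}{n} = (n-1)\sigma^2,
\]

so dividing by $n - 1$ rather than $n$ yields $\mathbb{E}[S^2] = \sigma^2$, i.e., an unbiased estimate.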

On a more general level, the writing is clear, if verbose, and the various notions are illustrated with numerous examples and exercises, all of which are easy. Most exercises actually consist of applying a formula, as if the intended audience were not assumed to be familiar with equations and variables.

One cannot help but notice that, with nearly 800 pages, the contents of the book prove to be light and do not prepare the reader for a more advanced level of statistics. The notion of maximum likelihood estimation is briefly mentioned in an optional section (§9.14), while no connection is made between MLE and least squares estimation, a connection simple enough to state (see below). Nonparametric statistics is reduced to rank tests in Chapter 16. The final chapter on Bayesian statistics is incomplete to the point of being misleading, with prior distributions seen only as devices used to include expert knowledge in the inference.
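Regarding that omitted connection, in generic notation (not the book's): under a Gaussian linear model $y_i = x_i^\top \beta + \varepsilon_i$ with i.i.d. $\varepsilon_i \sim \mathcal{N}(0, \sigma^2)$, the log-likelihood is, up to an additive constant,

\[
\ell(\beta, \sigma^2) = -\frac{n}{2}\log \sigma^2 - \frac{1}{2\sigma^2} \sum_{i=1}^n \big(y_i - x_i^\top \beta\big)^2,
\]

so maximizing the likelihood in $\beta$ amounts exactly to minimizing the sum of squares.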

A final point is that the authors chose the Minitab and SAS statistical software packages to illustrate some of the notions they introduce. The first is somewhat outdated, while the second is highly specific (and particularly pricey). I am thus unclear as to which audience could benefit from reading this book.

—Pierre Jacob, NUS

 

Probability and Statistics (Fourth Edition)

Morris DeGroot and Mark Schervish

Probability and Statistics (Fourth Edition)

Hardcover: 912 pages

Publisher: Pearson

Language: English

ISBN-13: 978-0321500465

This book covers a complete course in statistics at the undergraduate level and thus requires minimal background in mathematics. However, unlike many other textbooks, the authors do not set aside the theoretical difficulties of probability and statistics; instead, they choose to explain them in a simple way, relying on many examples.

The authors first introduce the basic notions of probability needed in statistics. The first chapter presents the concepts of randomness and probability and gives insight into the measure theory that is, in my view, required for a coherent approach to probability. All the main probabilistic tools are introduced, including conditional probability, random variables, and expectation. These notions are introduced in a rigorous way and well explained, which makes them simple to understand. However, the first six chapters do not make a complete course in probability: to keep the material accessible, the authors have placed some essential notions of probability theory beyond the scope of the book. For instance, the notion of almost sure convergence does not make sense without a more involved background in measure theory. Nonetheless, these probabilistic tools are more than enough to introduce the main concepts of statistics presented in the second part of the book.

If not an advanced course on probability, this book is a fairly complete course on statistics. Presenting both the frequentist and Bayesian approaches allows a straightforward introduction of the notions of uncertainty (on the parameter) and statistical decision-making. The book covers all the usual statistical methods, from estimation to linear regression, through testing and simulation.

Looking at the wider picture, the course is complete and self-sufficient for undergraduate students. It is well written and makes some difficult concepts accessible. The exercises are adapted to an undergraduate level and properly complete the course.

—Jean-Bernard Salomond, CREST

Statistics for the Life Sciences (4th edition)

M.L. Samuels, J.A. Witmer, and A. Schaffner

Hardcover: 672 pages

Publisher: Pearson

Language: English

ISBN-13: 978-0321652805

As suggested by its title, Statistics for the Life Sciences (SLS) is an introduction to statistics intended primarily for students and researchers in the life sciences. The main objectives pursued by the authors are clearly stated in the preface: to provide the reader with the basic statistical concepts and methods necessary to plan a scientific experiment, analyze the data, and interpret the results, while warning about common statistical traps and pitfalls. The only prerequisite is that the reader be familiar with basic algebra.

In writing this review, I have tried to assess how, and to what degree, the authors achieved such an ambitious program. I must say that this book is indeed a remarkable achievement, in that it manages to introduce fundamental statistical concepts with a high level of generality and scientific rigor while keeping the text accessible to readers unfamiliar with the domain.

One of the key ingredients of this success lies in the careful choice of the ideas introduced, as well as the order in which they are presented. Another related ingredient is the use of examples from real-life data sets (as advertised in the preface) to explain new concepts in an informal, easy-to-understand way before formalizing them. The whole book is based on this clever “bottom-up” approach, taking as its starting point the practical issues life-science researchers face in their everyday work. These are illustrated in numerous examples from the biostatistics literature, allowing the authors to introduce statistical concepts and tools by showing how they can help solve real-life problems, and to warn the reader against common misuses of these methods or misinterpretations of the results they produce.

The introduction starts with historical examples of clinical trials, such as the one conducted by Pasteur to demonstrate the usefulness of his vaccine. These introduce, in an intuitive and informal way, the related concepts of randomness and uncertainty concerning a scientific hypothesis. Hypothesis testing also emerges quite naturally from these examples, in the form of what turns out to be a permutation (or randomization) test, though at this stage it is described informally (a sketch of such a test is given below).
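To make the randomization idea concrete, here is a minimal sketch of a two-sample permutation test; this is my own illustration with invented data, not an example from the book:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented data: outcomes for a treated group and a control group.
treated = np.array([4.1, 5.3, 6.0, 5.5, 4.8])
control = np.array([3.2, 4.0, 3.7, 4.4, 3.9])

observed = treated.mean() - control.mean()
pooled = np.concatenate([treated, control])
n_treated = len(treated)

# Re-randomize the group labels many times and record the
# difference of means under each relabeling.
n_perm = 10_000
diffs = np.empty(n_perm)
for i in range(n_perm):
    perm = rng.permutation(pooled)
    diffs[i] = perm[:n_treated].mean() - perm[n_treated:].mean()

# Two-sided permutation p-value: the proportion of relabelings that
# produce a difference at least as extreme as the observed one.
p_value = np.mean(np.abs(diffs) >= abs(observed))
print(f"observed difference = {observed:.2f}, p-value = {p_value:.3f}")
```

The logic is exactly the one the book appeals to informally: if the treatment had no effect, the group labels would be arbitrary, and the observed difference should look typical among relabeled differences.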

The main subject of this opening chapter is sample variability and how it relates to the quantities of interest at the population level that the researcher wishes to infer. Statistics is presented as “the science of extracting the most information possible from the available data, and quantifying its reliability” (more or less in these words). The analysis begins with the planning and design of the experiment, whose consequences on the outcome of the analysis are thoroughly discussed. The issues of confounding and sample bias are introduced through examples, and random sampling is presented as a “gold standard” in scientific investigation, in that it ensures that the sample is representative, in a certain sense, of the target population of interest.

Later, the authors assert that “quantification of [the sampling error, due to random sampling from a population of interest in an experiment] is a major contribution that statistical theory has made to scientific thinking.” This is a frequentist-oriented way to present statistics, in that repeated sampling from a hypothesized population of interest is considered the basis of statistical analysis. Though other points of view are possible, I agree this one is reasonable when applied to life sciences, where problems are often (but not always) stated in terms of inferring population characteristics from observations made on randomly chosen individuals.

Quite logically, the second chapter deals with descriptive statistics and graphical representations: once an experiment has been planned and the data collected, the next step consists in verifying the quality of the data and studying its most salient features. Thus, measures of position and dispersion are reviewed, as well as basic representations of the data distribution. An interesting discussion of outliers is provided, wherein they receive a formal definition (as points falling beyond the “whiskers” of the boxplot representing the data distribution), and the reader is warned against the temptation of removing them from the data set without good reason, which can lead to biased results.

Finally, statistical inference is presented as “the process of drawing conclusions about a population, based on observations from that population,” again a clearly frequentist point of view, and the distinction between statistic (“a sample characteristic”) and parameter (“a population characteristic”) is made.

The third chapter paves the way to the next step of the analysis, which is statistical modeling, by introducing basic probability theory as a tool to quantify data variability. The central message here lies in the “frequency interpretation” of the probability of an event E as “the frequency of occurrence of E in an indefinitely long series of repetitions of the chance operation.” The alternative, “subjectivist” interpretation is briefly mentioned, as “expressing a person’s degree of belief that the event will happen,” but is declared outside the scope of the book. The authors do not discuss the philosophical difficulties that arise from assuming that a given random experiment can be repeated indefinitely many times, and under identical conditions. However, they do address the related problem that comes from assuming that the data are a random sample from an underlying population of interest. Indeed, because in the real world all populations are finite, in principle we should only deal with population distributions having finite support, even if the observed variable is considered continuous. The use of continuous density curves to model population distributions is justified by the authors, who suggest visualizing “the curve as an idealization of a relative frequency histogram with very narrow classes.”

Chapters four to six conclude the presentation of basic statistical theory. The fourth chapter is dedicated to the normal model and normality tests for a given sample. On a more general level, the fifth chapter introduces the notion of sampling distribution, as the distribution followed by a certain statistic of a random sample. To make this notion clearer, a convenient device is introduced, called the “meta-study.” It is defined as a collection of “indefinitely many replications of the same study.” Any given test statistic is replicated indefinitely many times in a meta-study and thus has indefinitely many possible values; the distribution of these values constitutes its sampling distribution (a small simulation of this idea is sketched below). The meta-study turns out to be a very powerful image that is used throughout the book to illustrate the meaning of frequentist quantities such as confidence intervals and p-values. The central limit theorem is then introduced through the limiting sampling distribution of the mean, and the corresponding asymptotic confidence intervals are reviewed. A thorough warning is issued against using these intervals if the i.i.d. assumption does not hold, or if the sample size is too small (n < 20) and the population distribution is non-normal.
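The meta-study is easy to emulate numerically. The following small simulation, my own illustration rather than the book's, replicates a study many times and recovers the sampling distribution of the mean predicted by the central limit theorem:

```python
import numpy as np

rng = np.random.default_rng(1)

# "Meta-study": many replications of the same study, here sampling
# n observations from a skewed (exponential) population with mean 2.
n, n_replications = 30, 100_000
samples = rng.exponential(scale=2.0, size=(n_replications, n))
sample_means = samples.mean(axis=1)

# The sampling distribution of the mean is centered at the population
# mean (2.0), with standard error close to sigma / sqrt(n) = 2 / sqrt(30).
print(f"mean of sample means: {sample_means.mean():.3f}")
print(f"sd of sample means:   {sample_means.std():.3f}")
print(f"theory (CLT):         {2.0 / np.sqrt(n):.3f}")
```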

I conclude this brief description of the contents and structure of the book with the seventh chapter, which is central in that it gives a very complete presentation of all the fundamental concepts of hypothesis testing, in the context of the comparison of two samples. Hypothesis tests are certainly among the statistical procedures that are the most difficult to interpret and that can lead to serious misconceptions. Here the authors do a remarkable job of warning the reader against common pitfalls. The most widespread error is to accept the null hypothesis as true when the p-value is large. The correct interpretation of a nonsignificant p-value is discussed and summed up in a quote from Carl Sagan: “The absence of evidence is not the evidence of absence.”

Less common issues are also investigated. For instance, an enlightening discussion is given of the difference between association (i.e., statistical dependence) and causality, as well as of the distinction between the statistical significance of an effect and the effect’s size. As in previous chapters, the Bayesian point of view is presented as an alternative, outside the scope of this book, that enables one to, for instance, actually compute the probability that the null hypothesis is true, or to incorporate into the study the information available from previous, related studies.

The remaining chapters present various test and estimation procedures, tailored to specific types of data and/or models, including categorical data, ANOVA, and linear regression.

Overall, the ideas presented throughout the book are organized so as to follow the natural order in which questions arise when conducting an experiment. At the same time, there is a gradient in the level of abstraction, as concepts become gradually more formalized one chapter after the other. Thus, the reader is led from initial questions concerning life sciences to the general concepts of statistical modeling and inference in a progressive and efficient way.

This accessibility and clarity come at a price, however, since the profuse illustrations and explanations necessarily take up a lot of space and limit the number of ideas and methods that can be introduced. The authors have clearly made the choice of quality over quantity (although the book is more than 600 pages and weighs almost three pounds) and have limited the scope of their work to simple and widely applicable methods. In this way, they have plenty of space to insist on the correct way to use them and on how to interpret results in a scientifically sound way. One can only agree with this choice, as there is a growing need for statistics in many applied fields (not limited to the life sciences), where it is often used and misused by nonspecialists for lack of sufficient training.

My main regret concerning the choices made by the authors is that they present only the classical, or frequentist, point of view. It is not my intention here to rekindle the old, and in my opinion outdated, quarrel between Bayesian and frequentist statistics; on the contrary, I am convinced that both approaches are valuable and should have their place in the basic training of any statistician. This is especially true in the life sciences, where Bayesian approaches are increasingly used to analyze data in fields as diverse as genomics, ecotoxicology, and probabilistic seismic assessment, to cite just a few.

Furthermore, on at least one occasion, the authors advocate the use of a method that, in my opinion, is essentially Bayesian. In the ninth chapter, devoted to categorical data, the estimation of the proportion of the population with a certain attribute of interest is discussed. The sample proportion is presented as a natural estimate, but an alternative estimate, called the “Wilson-adjusted sample proportion,” is then introduced, which is “equivalent to computing the ordinary sample proportion on an augmented sample: one that includes four extra observations (…), two [that are part of the sub-population with the attribute of interest] (…) and two that are not.” While this has the effect of “biasing the estimate towards the value 1/2,” it is attractive in that confidence intervals based on this biased estimate are “actually more reliable than those based on [the sample proportion].”

This Wilson-adjusted estimate is then used throughout the chapter as the reference method for estimating proportions. It can easily be checked that it is equal to the posterior mean of the population proportion under a Beta prior, given that the number of observations with the attribute of interest follows a binomial distribution. The resulting bias in the estimate is a well-known property of Bayesian estimates, also known as the “shrinkage effect,” and it is also well known that Bayesian credible intervals often have good frequentist coverage properties.
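Indeed, the check takes one line. If $X \sim \mathrm{Binomial}(n, p)$ and $p$ is given a $\mathrm{Beta}(2, 2)$ prior, the posterior is $\mathrm{Beta}(X + 2,\; n - X + 2)$, whose mean is exactly the Wilson-adjusted proportion:

\[
\tilde{p} \;=\; \mathbb{E}[\,p \mid X\,] \;=\; \frac{X + 2}{n + 4}.
\]

More generally, a $\mathrm{Beta}(a, b)$ prior gives $(X + a)/(n + a + b)$, i.e., shrinkage toward $a/(a + b)$ with strength governed by $a + b$, which is precisely the flexibility mentioned below.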

Recognizing the Bayesian nature of this estimate has practical advantages. For instance, it allows one to shrink the estimate toward a priori proportion values other than 1/2, and to vary the amount of shrinkage by adjusting the number of extra observations.

Finally, another aspect that, in my opinion, deserved further attention concerns references to other textbooks or resources for the reader interested in further statistical training. Indeed, SLS is completely self-contained, meaning it contains in itself enough material to address a wide range of statistical problems. This is an impressive achievement but, on the downside, the book does not seem to offer any perspective to readers eager to deepen their knowledge of statistics. Especially frustrating are the mentions of subjects that are outside the scope of the book without any suggestion of an appropriate reference for those eager to learn more. Such subjects include maximum likelihood estimation, Bayesian statistics, and specific models.

In conclusion, I was thrilled by how the authors of Statistics for the Life Sciences managed to introduce in an easily accessible way many subtle statistical notions, such as the distinction between statistics and parameters, or the meaning of a p-value in a test of statistical hypotheses. I think it is as much an excellent introduction to statistical thinking as a rich source of inspiration for teachers of statistics, or even for applied statisticians, whose work requires the ability to make complex statistical methodology understandable by nonspecialists.

—Merlin Keller, EDF

Stationary Stochastic Processes

Georg Lindgren

Stationary Stochastic Processes

Paperback: 375 pages

Publisher: Chapman and Hall/CRC

Language: English

ISBN-13: 978-1466557796

Stationary processes lie at the intersection of stochastic processes in probability theory, time series in econometrics, and applied harmonic analysis in engineering and signal processing. Several equivalent formulations exist to describe the same objects, using either deterministic or stochastic tools and discrete- or continuous-time formulations. A common ground is Fourier analysis and so-called second-order properties.
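For reference, the second-order (or weak) stationarity underlying all these formulations requires only that

\[
\mathbb{E}[X(t)] = m
\qquad \text{and} \qquad
\mathrm{Cov}\big(X(t + \tau), X(t)\big) = r(\tau)
\quad \text{for all } t, \tau,
\]

that is, a constant mean and a covariance depending only on the lag $\tau$.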

The book by Georg Lindgren on stationary processes aims at presenting the whole subject in a condensed and unified way, at a relatively elementary level, combining a rigorous mathematical treatment of most parts of the subject with powerful engineering-like heuristics. While Stationary Stochastic Processes covers a classical topic that has its roots in the 1950s–1960s, some emphasis is put on recent applications inspired by advances in measurement techniques. Lindgren’s attempt results in a fairly original book, with some limitations that seem intrinsic to its quite ambitious initial goal.

A first chapter is devoted to probability essentials, with emphasis on the intrinsic difficulty of manipulating random processes (like considering appropriate σ-fields on a product space and Kolmogorov’s extension theorem), and adopts the convenient point of view (maintained throughout the book) of considering second-order, continuous-time random processes with time indexed by the whole real line. The second-order characteristics are introduced.

Chapter 2 focuses on sample path properties of random processes and how smoothness can be read from the behavior of the covariance function. The proofs are outlined or referenced everywhere, but some difficulties of integration theory (hidden in the spectral theorem) are conveniently avoided. On balance, the exposition is quite thorough (e.g., a complete proof of Kolmogorov’s continuity criterion, recalled below, is given), and the reader will easily understand some delicate aspects of jump properties and the construction of stochastic integrals of deterministic integrands with respect to a random measure.
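For readers unfamiliar with it, the criterion states, in one common form, that if there exist constants $a, b, C > 0$ such that

\[
\mathbb{E}\,|X_t - X_s|^{a} \;\le\; C\,|t - s|^{1 + b} \qquad \text{for all } s, t,
\]

then $X$ admits a modification with continuous (indeed Hölder-continuous) sample paths.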

Chapter 3 concerns the spectral representation of stationary processes, and this is the crux of the presentation of the subsequent chapters, for it allows one to mix random and deterministic approaches. The price to pay is the unavoidable manipulation of complex-valued processes, but the presentation is complete and reader-friendly. After a complete exposition of the spectral representation of stationary processes, some elements of the Gaussian case are presented as well as, quite helpfully, some results on counting processes, including the Bartlett spectrum. I like in particular that the book treats counting processes and Gaussian processes at the same level, a strong point.
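For reference, the representation in question is the classical Cramér one: a mean-square continuous, weakly stationary process can be written as a stochastic Fourier integral

\[
X(t) = \int_{-\infty}^{\infty} e^{i\omega t}\, dZ(\omega),
\qquad
r(\tau) = \int_{-\infty}^{\infty} e^{i\omega \tau}\, dF(\omega),
\]

where $Z$ is a process with orthogonal increments satisfying $\mathbb{E}\,|dZ(\omega)|^2 = dF(\omega)$ and $F$ is the spectral distribution; the complex-valued $Z$ is precisely the price to pay mentioned above.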

Chapters 4 and 5 are about linear filters, with a view toward applications in signal processing. Some classical examples (linear oscillators) and more recent ones (shot noises) are presented, and the unavoidable ARMA filters are developed at length, in an almost deterministic approach made possible by the spectral theorem, together with the celebrated Nyquist-Shannon sampling theorem for band-limited signals and the use of the Hilbert transform and Karhunen-Loève expansions.

The last part of the book treats relatively independent topics. Chapter 6 goes back to a more probabilistic topic by presenting some elements of ergodic theory and mixing, while Chapters 7 and 8 deal with random fields and with level crossings and excursions, respectively. Chapter 8 is original and more demanding.

Overall, Stationary Stochastic Processes manages to present a wide topic of applied mathematics without falling off the thin ridge that lies between the probabilistic and the more signal-processing (deterministic) representations of stationary processes. A lot of material can be found therein, and it will be very helpful to young researchers.

The price to pay for such a broad treatment of the topic is that some proofs are omitted and, at a personal level, I regret that no statistical inference results are given, even though all the necessary material is presented (although this would require an unreasonable increase in the length of the book).

—Marc Hoffmann, Université Paris-Dauphine

Statistical Methods for Stochastic Differential Equations

Edited by M. Kessler, A. Lindner, and M. Sørensen

Statistical Methods for Stochastic Differential Equations

Hardcover: 507 pages

Publisher: Chapman and Hall/CRC

Language: English

ISBN-13: 978-1439849408

Statistical Methods for Stochastic Differential Equations is a revised version of the papers and lectures given at the seventh Séminaire Européen de Statistique, held at La Manga, Spain, in May 2007. The participants in the summer school were mainly young scientists, and the chapters of the book are mainly intended for such an audience.

The first three chapters, of approximately 100 pages each, consist of augmented lecture notes by the three main lecturers, Michael Sørensen, Per Mykland, and Jean Jacod. The book is completed by four additional chapters, in a more traditional format of research or review papers, on different and complementary topics in statistical and numerical methods for SDEs. The contributors are all renowned specialists in the field: O. Papaspiliopoulos and G. Roberts on importance sampling for estimation in SDEs (Ch. 4); F. Comte, V. Genon-Catalot, and Y. Rozenholc on nonparametric estimation of ergodic diffusions (Ch. 5); P. Brockwell and A. Lindner on Ornstein-Uhlenbeck–related models driven by Lévy processes (Ch. 6); and G.A. Pavliotis, Y. Pokern, and A.M. Stuart on multiscale diffusion estimation.

Although the last four chapters are generally well written, informative, and cover a wide range of different aspects of statistics for SDEs, I will mainly concentrate my review on the first three chapters, which constitute an original and very useful contribution to a field that too often has the reputation of being technical and somewhat austere. Estimating functions for diffusion processes observed at discrete times are the analogue of Z-estimators in parametric density estimation for independent and identically distributed random variables. However, discrete data from a diffusion are (almost) never i.i.d.; rather, they form (homogeneous) Markov chains with a transition density that is (almost) never tractable, forbidding maximum likelihood estimation. Estimating functions are thus an essential tool for frequentist parametric estimation. The field has some history, going back to the 1980s, but it is mainly under the impetus of the Danish school, and of M. Sørensen in particular, that it was developed in the late 1990s, reaching its maturity in the early 2000s. The chapter by M. Sørensen gives a thorough account of the state of the art. Under different asymptotic sampling schemes, different optimal procedures are derived (from the classical Godambe criterion to the more recent notion of small-Δ optimality). The exposition combines martingale techniques and ergodic theory within a framework of stochastic calculus for SDEs and is quite reader-friendly, especially for young researchers. It is noteworthy that the chapter also covers recent results on non-Markovian models, such as those arising from the observation of integrated diffusions, which have met some success recently, especially in financial applications.
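To fix ideas, the basic object of the chapter can be sketched as follows, in a standard notation that is not necessarily the book's: given observations $X_{t_0}, X_{t_1}, \dots, X_{t_n}$ of the diffusion at times $t_i = i\Delta$, one builds

\[
G_n(\theta) = \sum_{i=1}^{n} g\big(\Delta, X_{t_{i-1}}, X_{t_i}; \theta\big),
\qquad
\mathbb{E}_\theta\big[\, g(\Delta, X_{t_{i-1}}, X_{t_i}; \theta) \,\big|\, X_{t_{i-1}} \big] = 0,
\]

and defines the estimator as a solution of $G_n(\hat{\theta}_n) = 0$. The conditional moment condition makes $G_n$ a martingale under the true parameter, and optimality criteria such as Godambe's select, within a class of such functions, the $g$ leading to the smallest asymptotic variance.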

Financial applications are indeed the main theme of Chapter 2, by Per Mykland, which gives a recent account of statistical estimation for discretely observed diffusions, with a more financial-econometrics orientation. The framework is quite different from that of Chapter 1: in a continuous semimartingale framework, only a finite horizon is available to the statistician, and the asymptotic information grows not because of a longer time span, but because of more frequent observations. The main theme is the estimation of the integrated volatility, and the essential novelty compared to other accounts of this well-studied problem is that relatively arbitrary sampling schemes are considered, in which the observations are not synchronous in time, including the possibility that they are random (in a certain sense). This is of major importance when high-frequency financial data are considered. In this context, the important correction required by microstructure noise has to be incorporated (at small observation scales, the models depart from standard continuous Itô semimartingales), and this is treated at length in the chapter. I find the exposition particularly well written and easily accessible to the nonspecialist, although most of the underlying mathematical difficulties arising in this context are well identified.
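As a minimal reminder (in generic notation, not necessarily the chapter's): for a continuous Itô semimartingale $X$ observed at times $0 = t_0 < t_1 < \cdots < t_n = T$, the integrated volatility and its canonical estimator, the realized variance, are

\[
\int_0^T \sigma_t^2\, dt
\qquad \text{and} \qquad
\sum_{i=1}^{n} \big(X_{t_i} - X_{t_{i-1}}\big)^2,
\]

with the latter converging in probability to the former as the mesh of the grid goes to zero; the chapter's contribution is precisely to relax the regular-grid, synchronous, and noise-free assumptions behind this baseline result.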

Chapter 3, by Jean Jacod, essentially covers the same area as Chapter 2, although in a quite different but very complementary way. Jacod covers the problem of estimating high-frequency functionals of a discretized Itô semimartingale in a more traditional and thorough probabilistic way, using the powerful and classical tools of limit theory for semimartingales, too often considered as overly technical.

Chapter 3 presents all the available results in this setting (from the simple convergence of the quadratic variation to the quite sophisticated estimation of the Blumenthal-Getoor index of a semimartingale) in a simple and illuminating way. Some essential ingredients about the characteristics of semimartingales are recalled and provide very useful material for the reader unfamiliar with the quite involved general theory of stochastic processes. The (admittedly more demanding) proofs are deferred to a specific section, so that they can be omitted on a first reading or worked out in detail by the more advanced reader. Chapters 2 and 3 are thus complementary and will help students, teachers, and researchers in this area at various levels. It is a nice surprise that they can appear in the same book.

Although I have not described in detail the other four contributions, which are all important to the field and easy to read, it should be clear from this review that I strongly recommend the book to anyone interested in the wide topic of statistical methods for SDEs, whether a specialist or a student starting in the field.

—Marc Hoffmann, Université Paris-Dauphine