Probability and Statistical Inference. Robert Bartoszynski

      Revised edition of: Probability and statistical inference / Robert Bartoszyński, Magdalena Niewiadomska‐Bugaj. 2nd ed. c2008. | Includes bibliographical references and index.
      Identifiers: LCCN 2020021071 (print) | LCCN 2020021072 (ebook) | ISBN 9781119243809 (cloth) | ISBN 9781119243816 (adobe pdf) | ISBN 9781119243823 (epub)
      Subjects: LCSH: Probabilities. | Mathematical statistics.
      Classification: LCC QA273 .B2584 2021 (print) | LCC QA273 (ebook) | DDC 519.5/4--dc23
      LC record available at https://lccn.loc.gov/2020021071
      LC ebook record available at https://lccn.loc.gov/2020021072
      Cover Design: Wiley
      Cover Images: (graph) Courtesy of Magdalena Niewiadomska‐Bugaj; Colorful abstract long exposure pictures © Artur Debat/Getty Images

      – MNB

      You have in front of you the third edition of "Probability and Statistical Inference," a text originally published in 1996. I have been using this book in the classroom since then, and it has always been interesting to see how it serves the students, how they react to it, and what could still be done to make it better. These reflections prompted me to prepare a second edition, published in 2007. But academia is changing quickly: who the students are is changing, and how we should teach to help them learn is changing as well. This is what made me consider a third edition. The response from Wiley Publishing was positive, and my work began.

      There were three main changes that I saw as necessary. First, adding a chapter on the basics of Bayesian statistics, as I realized that upper-level undergraduate students and graduate students needed an earlier introduction to Bayesian inference. Another change was to make the book more appropriate for the flipped classroom format. I have experimented with this format for three years now, and it is working quite well. The book introduces and illustrates concepts through more than 400 examples. Preparing the material mainly at home gives students more time in class for questions, discussion, and problem solving. I have also added over 70 new problems to make the selection easier for the instructor. A third change was including an appendix with R code to help students complete projects and homework assignments. My two-semester class based on this text includes three projects. The first one, in the fall semester, has students present applications of selected distributions, including graphics. The two projects for the spring semester involve resampling methods. The necessary R code is included in the appendix.

      There are many people to whom I owe my thanks. First, I would like to thank Wiley Editor Jon Gurstelle, who liked the idea of preparing the third edition. After Jon accepted another job elsewhere, the book and I came under the excellent care of the editorial team of Mindy Okura-Mokrzycki, Kathleen Santoloci, Linda Christina, and Kimberly Monroe-Hill, who have supported me throughout this process. I would also like to thank Carla Koretsky, the Dean of the College of Arts and Sciences at Western Michigan University, and WMU Provost Sue Stapleton for granting me a semester-long administrative sabbatical leave that significantly sped up the progress of the book.

      MNB

      November 2020

      The first edition of this book was published in 1996. Since then, powerful computers have come into wide use, and it became clear that our text should be revised and material on computer‐intensive methods of statistical inference should be added. To my delight, Steve Quigley, Executive Editor of John Wiley and Sons, agreed with the idea, and work on the second edition began.

      Unfortunately, Robert Bartoszyński passed away in 1998, so I was left to carry out this revision by myself. I revised the content by creating a new chapter on random samples and adding sections on Monte Carlo methods, bootstrap estimators and tests, and permutation tests. More problems were added, and existing ones were reorganized. I hope that nothing was lost of the "spirit" of the book, which Robert liked so much and of which he was very proud.

      This book is intended for seniors or first‐year graduate students in statistics, mathematics, natural sciences, engineering, and any other major where an intensive exposure to statistics is necessary. The prerequisite is a calculus sequence that includes multivariate calculus. We provide the material for a two‐semester course that starts with the necessary background in probability theory, followed by the theory of statistics.

      What distinguishes our book from other texts is the way the material is presented and the aspects that are stressed. To put it succinctly, understanding "why" is prioritized over the skill of "how to." Today, in an era of undreamed-of computational facilities, reflection aimed at understanding is not a luxury but a necessity.

      Probability theory and statistics are presented as self‐contained conceptual structures. Their value as a means of description and inference about real‐life situations lies precisely in their level of abstraction—the more abstract a concept is, the wider is its applicability. The methodology of statistics comes out most clearly if it is introduced as an abstract system illustrated by a variety of real‐life applications, not confined to any single domain.

      In the material that is seldom included in other textbooks on mathematical statistics, we stress the consequences of nonuniqueness of a sample space and illustrate, by examples, how the choice of a sample space can facilitate the formulation of some problems (e.g., issues of selection or randomized response). We introduce the concept of conditioning with respect to partition (Section 4.4); we explain the Borel–Kolmogorov paradox by way of the underlying measurement process that provides information on the occurrence of the condition (Example 7.22); we present the Neyman–Scott theory of outliers (Example 10.4); we give a new version of the proof of the relation between mean, median, and standard deviation (Theorem 8.7.3);