Computer science, and artificial intelligence in particular, have no curriculum in research methods, as other sciences do. This book presents empirical methods for studying complex computer programs: exploratory tools to help find patterns in data, experiment designs and hypothesis-testing tools to help data speak convincingly, and modeling tools to help explain data. Although many of these techniques are statistical, the book discusses statistics in the context of the broader empirical enterprise. The first three chapters introduce empirical questions, exploratory data analysis, and experiment design. The blunt interrogation of statistical hypothesis testing is postponed until chapters 4 and 5, which present classical parametric methods and computer-intensive (Monte Carlo) resampling methods, respectively. This is one of the few books to present these new, flexible resampling techniques in an accurate, accessible manner.
Much of the book is devoted to research strategies and tactics, introducing new methods in the context of case studies. Chapter 6 covers performance assessment, chapter 7 shows how to identify interactions and dependencies among several factors that explain performance, and chapter 8 discusses predictive models of programs, including causal models. The final chapter asks what counts as a theory in AI, and how empirical methods -- which deal with specific systems -- can foster general theories. Mathematical details are confined to appendixes and no prior knowledge of statistics or probability theory is assumed. All of the examples can be analyzed by hand or with commercially available statistics packages.
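The computer-intensive resampling methods mentioned above (chapter 5) can be sketched in a few lines. The following is a minimal illustration, not code from the book: a percentile bootstrap confidence interval for a mean, applied to hypothetical program runtimes (the data and function names are invented for this example).

```python
import random

def bootstrap_ci(data, stat, n_resamples=10000, alpha=0.05, seed=0):
    """Percentile bootstrap: resample the data with replacement many
    times, compute the statistic each time, and read the confidence
    interval off the sorted resampled statistics."""
    rng = random.Random(seed)
    stats = sorted(
        stat([rng.choice(data) for _ in data])
        for _ in range(n_resamples)
    )
    lo = stats[int((alpha / 2) * n_resamples)]
    hi = stats[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

def mean(xs):
    return sum(xs) / len(xs)

# Hypothetical runtimes (seconds) of a program on eight trials.
runtimes = [1.2, 0.9, 1.5, 1.1, 1.3, 0.8, 1.4, 1.0]
lo, hi = bootstrap_ci(runtimes, mean)
```

No distributional assumptions are needed, which is what makes these techniques flexible: the same loop works for medians, ratios, or any statistic of a complex system's performance data.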
The Common Lisp Analytical Statistics Package (CLASP), developed in the author's laboratory for Unix and Macintosh computers, is available from The MIT Press.
A Bradford Book