Statistical Methods in Research

Course Description

The course covers statistical methods in human and technology studies and experiments. It opens by contrasting hypothesis-driven research supported by statistical inference with rigorous deduction from first principles, in order to delineate the current mode of science from the past one and motivate the subject. It then proceeds in a stepwise manner, building the student's background in the statistical tools of the trade.

Although the introduction and methodological sections of papers differ from discipline to discipline (e.g., algorithms vs. assays), the results sections should look similar, according to currently accepted best practices. The data should be derived from an appropriate study or experimental design and subjected to the relevant statistical tests. There is no such thing as statistics for computer scientists or statistics for biologists; statistics is the same for everybody. However, certain disciplines tend to use some tools more than others, and instruction needs to be tailored to differing educational backgrounds. In computer science in particular, the awakening to standard analysis of study and experimental results has been slow; most of this analysis used to be carried out heuristically. This has changed in the last few years, and several computer science disciplines have already adopted statistical methods as the standard in results analysis, while others are bound to follow sooner or later. Among the computer science communities at the forefront of this movement are the Human-Computer Interaction and Computer Vision communities. The Statistical Methods course aims to cover this need and is paced with the typical background of graduate students in computer science in mind. It is very practical in its orientation (no proofs), emphasizing the understanding of concepts and the ability to apply the right design or test to the right problem.

The main part of the course starts with the distinction between continuous and discrete variables and the enormous implications this carries for the selection of tests. It then proceeds to the distributions, probabilities, and error types that are fundamental to the construction of the t-test, ANOVA, and the non-parametric tests. In the second stage, the course visits regression in its various forms, completing the coverage of the significance and association tests used in almost all scientific papers. The treatment of data collection comes next, although one might expect it earlier; the reason for the delay is the emphasis on quality-control methods in field data collection, which require knowledge of statistical testing and association. In the last stage of the course's main part, we visit the various experimental designs, including newer approaches such as the so-called Mixed Methods. Before starting to analyze data, one needs to know according to which principles to collect those data in order to address her/his hypothesis; for this, s/he needs to pick the right design. Even impeccable testing will not save the day if the researcher picked the wrong study or experimental design (garbage in, garbage out). Hence, toward the end of the course the student acquires a 30,000-foot view of the scientific process, solidifying her/his ability to design, collect, and test.
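As a small illustration of how the continuous/discrete distinction drives test selection, consider the sketch below, which uses only base R functions on simulated data. The variable names and numbers are made up for illustration and are not course material.

    # Illustrative sketch: variable type and group count drive test choice.
    set.seed(42)

    # Continuous outcome, two groups -> two-sample t-test
    reaction_time_a <- rnorm(30, mean = 350, sd = 40)   # e.g., interface A
    reaction_time_b <- rnorm(30, mean = 380, sd = 40)   # e.g., interface B
    t.test(reaction_time_a, reaction_time_b)

    # Continuous outcome, three or more groups -> one-way ANOVA
    scores <- data.frame(
      value = c(rnorm(20, 10), rnorm(20, 12), rnorm(20, 11)),
      group = factor(rep(c("A", "B", "C"), each = 20))
    )
    summary(aov(value ~ group, data = scores))

    # Discrete (categorical) outcome -> chi-squared test of independence
    counts <- matrix(c(30, 10, 20, 20), nrow = 2,
                     dimnames = list(Group = c("A", "B"),
                                     Outcome = c("Success", "Failure")))
    chisq.test(counts)

    # Doubtful normality -> non-parametric alternative to the t-test
    wilcox.test(reaction_time_a, reaction_time_b)

The same decision logic, outcome type, number of groups, and distributional assumptions, recurs throughout the course.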

The emergence of the statistical design of studies/experiments and the statistical analysis of results coincides with the spread of interdisciplinary research projects. These projects involve many people from many disciplines and run for a long time. An example of such a booming area in Computer Science is Wireless Health, where computer scientists, medical doctors, and social scientists are all involved. In such projects, heuristic analysis of results, where iffy outcomes may be claimed as improvements, is no longer an option: without rigorous analysis, it is entirely possible for a team to find, after several years and millions of dollars, that it has wasted its time and resources.

The course has three homework assignments to reinforce the understanding of the three main segments of the course, plus a short one-page essay to cover the culminating lecture on research attitudes. In place of an exam, the course has a semester-long project: a problem is defined for the class, and each student is required to come up with a study design, collect and quality-control data, and perform tests, putting everything together in the form of a term paper. In 2014 the theme of the project was the career quantification of computer-science professors and its reflection on their departments, based on openly available publication and funding data. The question was whether objective performance data were in step with the departmental rankings reported by U.S. News & World Report. Project themes may change from year to year to keep things interesting.
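To give a flavor of the kind of analysis such a project might involve, the sketch below checks the agreement between an objective metric and a published ranking using Spearman's rank correlation in R; the data frame, its column names, and all numbers are entirely fabricated for illustration.

    # Hypothetical sketch: does an objective metric agree with a ranking?
    # All values below are made up for illustration.
    dept <- data.frame(
      usnews_rank  = c(1, 2, 3, 4, 5, 6),        # smaller = better
      citation_idx = c(95, 88, 90, 70, 65, 72)   # invented performance metric
    )
    # A strong negative Spearman correlation (better rank paired with a
    # higher metric) would indicate the two are in step.
    cor.test(dept$usnews_rank, dept$citation_idx, method = "spearman")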

Students need to know R in order to process the data and produce the corresponding plots. To this end, we will hold a tutorial class on R to get started; the rest will be easy to pick up as we go along. Please note that after each lecture we will also hold interactive programming sessions in R, which will give students invaluable hands-on experience.
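As a taste of the level at which the tutorial starts, a first R session might look like the following; the file and column names are hypothetical.

    # First steps in R (file and column names are hypothetical):
    data <- read.csv("results.csv")             # load a data set
    summary(data)                               # descriptive statistics
    hist(data$score, main = "Score distribution", xlab = "Score")
    boxplot(score ~ condition, data = data)     # compare groups visually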

Spring 2017

Spring 2016

Spring 2015
