The first topic deals with the fundamental underpinnings of quantitative research: What is the language of research? What is validity, why is it important, and what types of validity are distinguished? The topic also addresses ethical issues and principles in research.
By definition, empirical research is based on data. Unless a census is feasible and appropriate, sampling becomes a very important aspect. Beware: even the most creative statistical analyses will not make up for flawed sampling, so think twice about your sampling strategy. The topic also introduces the issue of external validity and key terms of sampling such as probability and non-probability sampling.
As a rule, quantitative research requires measurement as a type of quantification. According to the mainstream concept of measurement in the social sciences, measurement can take place at different levels. These will be discussed, as will the consequences for data analysis. Finally, quality criteria of measurement (specifically reliability and validity) will be addressed. Whether you develop your own measurement instruments or use existing ones, by the end of this course you should know what to look for.
While experiments are becoming more and more popular among social scientists, a lot of data is collected (and can only be collected) through surveys. Thus, we will discuss the principles of good survey research: the types of surveys, how to select a survey method, how to construct a survey, what kinds of questions are appropriate, how they should be phrased and ordered, what a response scale should look like, and what the pros and cons of survey research are.
Now that you have an idea of what measurement means and what its goals are, we look at selected methods of scaling and index construction. Specifically, we will learn about Thurstone scaling, Likert scaling (very widespread) and Guttman scaling. If there is some time left, we may briefly look at other approaches as well.
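To give a concrete feel for the index-construction idea behind Likert scaling, here is a minimal sketch that sums item scores into a single index, reverse-coding negatively worded items first. The function names, item labels, and the 1-5 response scale are assumptions for illustration, not part of any standard tool.

```python
# Illustrative sketch only: helper names and the 1-5 scale are assumptions.

def reverse_code(score, scale_min=1, scale_max=5):
    """Flip a score on a symmetric Likert response scale (e.g. 5 -> 1, 4 -> 2)."""
    return scale_max + scale_min - score

def likert_index(responses, reversed_items):
    """Sum item scores into one index, reverse-coding negatively worded items."""
    return sum(
        reverse_code(score) if item in reversed_items else score
        for item, score in responses.items()
    )

answers = {"q1": 4, "q2": 5, "q3": 2}                 # q3 is negatively worded
print(likert_index(answers, reversed_items={"q3"}))   # 4 + 5 + (6 - 2) = 13
```

Reverse-coding before summing ensures that a high index score consistently means strong agreement with the underlying construct, whichever direction an item was worded in.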
Regardless of the type of research you plan to do, design is always fundamental. First, we discuss internal validity and talk about the fundamentals of establishing cause and effect. Then we examine the various threats that arise in single-group and multiple-group designs.
It is often argued that experiments are the best way, some say the only way, to investigate causal claims. Thus, the experimental design is of utmost importance. Even if you do not intend to run your own experiments, you may refer to published work using experiments. This unit deals with two-group experimental designs, probabilistic equivalence, and random selection and assignment, the basics of experimental research.
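Random assignment, the mechanism that gives the two-group design its probabilistic equivalence, can be sketched in a few lines. The helper below is a hypothetical illustration (the name `random_assignment` is an assumption, not a standard routine): it shuffles the participant list and splits it in half.

```python
import random

def random_assignment(participants, seed=None):
    """Randomly split a participant list into two (near-)equal groups."""
    rng = random.Random(seed)     # seeding makes the split reproducible
    shuffled = list(participants)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

treatment, control = random_assignment(range(1, 21), seed=42)
print(len(treatment), len(control))  # 10 10
```

Because every participant has the same chance of landing in either group, the groups are equivalent in expectation on all variables, measured or not, which is exactly what makes causal comparison credible.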
We will also address factorial designs, randomized block designs, covariance designs and hybrid experimental designs.
True experiments are not always doable; quasi-experimental designs are then an option. We will learn about the nonequivalent-groups design, the regression-discontinuity design and other quasi-experimental designs.
Once you have collected the data, you will be ready for analysis. We will be introduced to data preparation, data description, and elementary statistics such as correlation coefficients.
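To make the correlation coefficient concrete, here is a minimal sketch of the Pearson product-moment correlation computed directly from its textbook formula (the function name is an assumption for the example; statistical packages provide this out of the box).

```python
from math import sqrt
from statistics import mean

def pearson_r(x, y):
    """Pearson product-moment correlation from the textbook formula."""
    xbar, ybar = mean(x), mean(y)
    num = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    den = sqrt(sum((xi - xbar) ** 2 for xi in x) *
               sum((yi - ybar) ** 2 for yi in y))
    return num / den

print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))  # perfectly linear -> 1.0
```

The coefficient ranges from -1 (perfect negative linear relationship) through 0 (no linear relationship) to +1 (perfect positive linear relationship).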
Furthermore, we will deal with fundamental inferential statistics such as the t-test. The concept of dummy variables will also be explained.
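The t-test and dummy-variable ideas connect neatly: when group membership is coded as a 0/1 dummy, the regression slope of the outcome on the dummy equals the difference in group means. The sketch below illustrates both; the helper names are hypothetical, and the pooled-variance (equal-variances) form of the t statistic is assumed.

```python
from math import sqrt
from statistics import mean, variance

def pooled_t(group_a, group_b):
    """Two-sample t statistic with pooled variance (equal variances assumed)."""
    n1, n2 = len(group_a), len(group_b)
    sp2 = ((n1 - 1) * variance(group_a) +
           (n2 - 1) * variance(group_b)) / (n1 + n2 - 2)
    return (mean(group_a) - mean(group_b)) / sqrt(sp2 * (1 / n1 + 1 / n2))

def dummy_slope(z, y):
    """OLS slope of y on a 0/1 dummy z; equals the difference in group means."""
    zbar, ybar = mean(z), mean(y)
    num = sum((zi - zbar) * (yi - ybar) for zi, yi in zip(z, y))
    return num / sum((zi - zbar) ** 2 for zi in z)

control, treatment = [2, 3, 4], [5, 6, 7]
print(dummy_slope([0, 0, 0, 1, 1, 1], control + treatment))  # 3.0, the mean difference
print(pooled_t(treatment, control))
```

This equivalence is why the t-test can be seen as a special case of the general linear model, a perspective the analysis units return to.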
Going full circle, we will come back to conclusion validity, threats to conclusion validity and ways to improve it.
- Analysis I: Conclusion Validity / Threats to Conclusion Validity / Improving Conclusion Validity / Statistical Power / Data Preparation / Descriptive Statistics / Correlation
- Analysis II: Inferential Statistics / The t-Test / Dummy Variables / General Linear Model / Posttest-Only Analysis / Factorial Design Analysis / Randomized Block Analysis / Analysis of Covariance