Social desirability in survey research: Can the list experiment provide the truth?


Bibliographic Details
Main Author: Gosen, Stefanie
Contributors: Wagner, Ulrich (Prof. Dr.) (doctoral thesis supervisor)
Format: Dissertation
Language: English
Published: Philipps-Universität Marburg, 2014
Online Access: PDF full text
Description
Summary: The phenomenon of social desirability response bias in survey research has been discussed in social psychology and the social sciences for many years. Distortions often occur when a question or a topic of interest is ‘sensitive’ (Lee, 1993), meaning that it has a potentially embarrassing, threatening, or stigmatizing character (Dalton, Wimbush, & Daily, 1994). To avoid socially desirable responses in self-reports, indirect survey methods have been applied. These techniques are intended to guarantee the respondents’ anonymity and thereby elicit more valid self-reports (Tourangeau & Yan, 2007). One method that is supposed to achieve this goal is the list experiment. In general, the list experiment produces an estimate of the proportion of people who agree with a sensitive item. To determine the social desirability bias, this estimate is then compared with a direct self-report question. If a social desirability bias exists, the estimate from the list experiment should be higher than that from the direct self-report question. However, the literature does not provide a consistent picture of how well the list experiment works. Moreover, several published studies report complications with data collection and with the results of the list experiment (Biemer et al., 2005). The reasons for these inconsistencies are often not apparent. The aim of this dissertation was therefore to make sound statements about the validity and consistency of the list experiment and to identify specific factors that render it ineffective.

The dissertation consists of two manuscripts that both evaluate the validity of the list experiment. Manuscript #1 demonstrated the inconsistency of the list experiment in the field of prejudice research on the basis of three studies covering two different survey modes and a panel dataset. In Study 1 (N=229, representative), the list experiment provided results in the expected direction and produced a higher estimate than the direct self-report question. Study 2 (a modified repetition, N=445, representative) showed no significant difference between the two conditions of the list experiment, and the direct self-report item yielded a higher approval rate than the list experiment. To test the validity of the method and to find factors that explain its failure, Study 3 (N=1,569, non-representative) compared three different list experiments with each other. The three list experiments once again provided inconsistent results. Furthermore, one factor explaining the inconsistent results was identified. The essential question was whether the increase at the mean level occurs simply because of the higher number of items in the test condition. Hence, a list of four nonsensitive items was compared with a list of five nonsensitive items. The analysis revealed a significant mean difference between the four-item and the five-item condition. This result has serious consequences for the validity of the list experiment itself, because the increase of the mean in the test condition depends not only on the content of the particular items but also on the number of items. An additional test-retest panel analysis revealed that respondents answer more stably over time when the baseline condition includes only four nonsensitive items. Manuscript #2 identified several factors that can partly explain the inconsistent results of the list experiment.
Study 1 used cognitive interviews (N=7) to demonstrate that the list experiment was predominantly understood by the respondents, and that the sensitive item was only partially perceived as such. Study 2 (an experimental online study, N=1,878) tested whether the sensitive item influenced agreement with the nonsensitive items (item difficulty). It was found that the approval rate for the nonsensitive items increases when a sensitive item is included. For the list experiment, this result means that the mean level in the test condition increases due to a shift in item difficulty and not due to the content of the sensitive item, as the list experiment presupposes. Study 3 (a replication of Study 2, N=948) tested the hypotheses again in a slightly varied design. Here, the first hypothesis was confirmed with exclusively nonsensitive items. Study 3 also corroborated the hypothesis that the procedure of indicating the number of ‘yes’ answers is distorted in general. This finding implies that within the list experiment the reported item counts are biased in the baseline as well as in the test condition. In sum, the results of the two manuscripts indicate that the list experiment is unable to produce valid and consistent results. The results of this dissertation suggest that, in the process of answering a list experiment, factors arise that cause distortions and affect the overall functioning of the method. In total, three moderating factors were found, which occurred independently of one another or together.
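For readers unfamiliar with the method, the following is a minimal sketch of the standard difference-in-means estimator that the list experiment relies on, as described in the summary above. All concrete numbers (item counts, sample sizes, ‘yes’ rates) are hypothetical and chosen only for illustration.

```python
import numpy as np

# Minimal sketch of the list experiment's difference-in-means estimator.
# All numbers (item counts, sample sizes, "yes" rates) are hypothetical.
rng = np.random.default_rng(0)

J = 4                 # nonsensitive items shown in the baseline condition
n = 500               # respondents per condition
p_items = np.array([0.2, 0.5, 0.3, 0.6])  # assumed "yes" rates of the nonsensitive items
p_sensitive = 0.25    # assumed true prevalence of the sensitive attitude

# Each respondent reports only the COUNT of items answered "yes",
# never which items, which is what protects anonymity.
baseline_counts = rng.binomial(1, p_items, size=(n, J)).sum(axis=1)
test_counts = (rng.binomial(1, p_items, size=(n, J)).sum(axis=1)
               + rng.binomial(1, p_sensitive, size=n))

# The estimator: mean count in the test condition (J nonsensitive items
# plus the sensitive item) minus the mean count in the baseline condition.
pi_hat = test_counts.mean() - baseline_counts.mean()
print(f"Estimated prevalence of the sensitive attitude: {pi_hat:.3f}")
```

The estimate recovers the prevalence of the sensitive attitude only if adding the sensitive item leaves the ‘yes’ rates of the nonsensitive items unchanged; the item-difficulty shift and the list-length effect reported in the summary violate exactly this assumption.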
DOI: 10.17192/z2014.0228