Modeling Sources of Self-report Bias in a Survey of Drug Use Epidemiology

https://doi.org/10.1016/j.annepidem.2004.09.004

Purpose

Well-documented errors in the reporting of drug-related behaviors have been attributed to several sources. These include: 1) respondent difficulties in understanding survey questions, 2) problems in recalling the information necessary to accurately answer these questions, and 3) social pressures that discourage accurate reporting. We report covariance structure models designed to simultaneously evaluate each of these potential sources of error.

Methods

Data examined are from a community survey of 627 Chicago adults that collected drug use self-reports (via ACASI technology), multiple biological specimens (including hair, urine, and saliva) that permit self-report validation, and a series of debriefing probes designed to collect systematic information regarding respondent comprehension difficulties, memory difficulties, and social desirability concerns. These three sets of information were employed to construct latent variable covariance structure models that enabled an evaluation of the effects of each potential source of reporting error on the quality of drug use reporting.
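As a rough illustration of how these three sets of information can be combined in such a model, the sketch below specifies latent social desirability, memory, and comprehension factors measured by debriefing probes and regresses a self-report/biology discordance indicator on them. It is not the authors' specification: the Python semopy package, the lavaan-style syntax, and all variable and file names (sd1-sd3, mem1-mem3, comp1-comp3, discordant, probes_and_discordance.csv) are illustrative assumptions.

    # Illustrative sketch only (not the authors' code). Assumes the pandas and
    # semopy packages; variable names and the input file are hypothetical.
    import pandas as pd
    import semopy

    # Three latent error sources measured by debriefing probes; a 0/1
    # discordance indicator (self-report vs. biological test result) is
    # regressed on the three latent factors.
    model_desc = """
    SocialDesirability =~ sd1 + sd2 + sd3
    Memory =~ mem1 + mem2 + mem3
    Comprehension =~ comp1 + comp2 + comp3
    discordant ~ SocialDesirability + Memory + Comprehension
    """

    data = pd.read_csv("probes_and_discordance.csv")
    model = semopy.Model(model_desc)
    model.fit(data)
    print(model.inspect())  # factor loadings and structural coefficients

Treating the binary discordance indicator as a continuous outcome, as this sketch does, is a simplification; it is meant only to convey the general form of a latent variable covariance structure model.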

Results

Social desirability concerns were predictive of discordant drug use reporting and drug use under-reporting. Memory difficulties were predictive of drug use over-reporting. Differences in the predictive power of these variables were found across race/ethnic groups.

Conclusions

Both memory difficulties and social desirability concerns are independent sources of measurement error in surveys of drug use epidemiology.

Introduction

Survey measurement error continues to be a serious problem in epidemiologic and other health-related research. For example, although surveys retain their status as the primary methodology for monitoring substance use patterns in the United States, concerns regarding the quality of self-reports of illicit behaviors remain, challenging the credibility of this research (1). Considerable effort has to date been invested in assessing potential sources of measurement error and testing innovations designed to improve the quality of drug use self-reports (2, 3). Some of this research has focused on elements of the survey task (4, 5) and/or the interviewer (6, 7).

Much of this research has also concentrated on evaluating characteristics of the respondent that may be associated with the accuracy of substance use reporting (8, 9, 10, 11). This work has in large measure been driven by the assumption that measurement error in substance use reporting is primarily a consequence of social desirability concerns. The illegal nature of most recreational drug use and the social stigma often associated with it are believed to provide many respondents with adequate motivation to deliberately under-report, or deny altogether, use of these substances. The desire to maintain a harmonious exchange with an interviewer is viewed as an additional motivation for under-reporting. Hence, deliberate under-reporting, motivated by confidentiality fears and/or the wish to avoid an uncomfortable social exchange, is widely believed to be the primary mechanism responsible for measurement error in substance abuse research. Schaeffer (12) has organized a framework for understanding the perceived risks and losses that respondents may associate with answering truthfully when asked questions about threatening topics such as substance use.

Concerns with privacy, face-saving, and the threat of criminal sanctions are also believed to be mechanisms underlying apparently robust race/ethnic differences in the quality of substance use reporting. A recently completed review by Johnson and Bowman (13) documented over 30 studies in which the reliability and/or validity of substance use reports varied significantly among survey respondents of differing race/ethnic backgrounds. In most cases, minority group membership was associated with poorer quality reporting of substance use behaviors. These differences were attributed to a cluster of factors related to social desirability concerns, including greater emphasis on confidentiality, privacy and harmonious social interactions, greater suspicion of research motives, and greater concern with criminal prosecutions in minority communities.

It should be noted that the social desirability framework described above also makes the implicit assumption that respondents are able to accurately comprehend the survey questions and retrieve from memory the information necessary to construct correct answers. Indeed, these assumptions would appear to be accepted by many in the research community as an article of faith. In particular, the relative scarcity of studies that investigate question comprehension and memory retrieval as potential sources of measurement error in substance abuse research seems to support this conclusion. The few available studies addressing cognitive processing suggest their relevance. Ethnographic work, for example, has documented that the names of drugs used in survey questions may not be consistent with the names used in the community (14). This may be a problem somewhat unique to drug abuse research, given ever-changing street drug terminology as new drugs become available and as use patterns change. Drug use vocabulary is also likely to vary across regions. Consequently, questionnaire wording may not convey the meaning to respondents that researchers assume it does, and personal definitions of various drugs may often override those provided in survey questions (15). Surveys may also elicit inaccuracies regarding the details surrounding drug use experience, such as frequency of use (16). Other methodological research suggests broad variability in respondent interpretations of survey questions (17, 18). Innovations such as audio computer-assisted self-interviewing (ACASI), originally designed to improve reporting by reducing social desirability pressures and minimizing reading comprehension difficulties (19), may undermine subjects' comprehension of surveys by diminishing interviewers' role in clarifying questions and resolving misunderstandings that may arise about the meanings of objective behavioral questions.

Respondent memory has received considerable attention in recent years as a source of survey measurement error (20, 21, 22). Although research has been successful in using cognitive interventions to assist respondents in accessing relevant health-related memories and improving responses (23), there have been few efforts to use knowledge of these processes to improve substance use reporting. One exception is a study reported by Hubbard (24), who conducted two experiments in which variations of an anchoring manipulation that included the use of a calendar were used to assist respondents in framing their responses. Although findings generally indicated that these procedures did little to improve recall, one experiment was successful in increasing reports of lifetime behaviors, suggesting the need for further investigation. Involvement with certain substances may also create irreversible memory impairment (25, 26, 27), a topic that has received little attention in the survey research literature.

We are aware of no research that attempts to simultaneously evaluate the effects of these various cognitive processes on substance use reporting error. Doing so would be useful for determining the degree to which the conventional wisdom regarding the primacy of social desirability concerns in substance use reporting is correct and/or the degree to which other elements of information processing, such as question comprehension and memory, also contribute to various types of reporting error. The goal of this article is to examine the relative effects of comprehension, memory, and social desirability on the accuracy of self-reported drug behaviors using a community sample. Potential race/ethnic differences in these processes will also be explored. In conducting these analyses, we hypothesize that each of these processes may influence substance use reporting error. Additionally, we hypothesize that the impact of these three processes will not be consistent across race/ethnic subgroups.

Section snippets

Methods

The data for this study come from a multi-stage area probability survey of Chicago residents that was conducted between June 2001 and January 2002 (28). At stage 1, census tracts in Chicago were randomly selected. At stage 2, one block was randomly selected from within each sampled tract. At stage 3, every household on the sampled block was screened for eligibility. At stage 4, one 18- to 40-year-old adult was selected at random from within each eligible household (29). Interviews were
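As a rough, self-contained illustration of this four-stage design, the following sketch mirrors the selection steps described above. It is not the survey's sampling code: all tract, block, household, and age values are simulated, and the counts are arbitrary.

    # Illustrative simulation of the four-stage area probability design.
    # Every unit here is randomly generated; nothing reflects the actual frame.
    import random

    random.seed(1)
    # A toy frame: tracts, each containing several blocks (identifiers are made up).
    tracts = {t: [f"block_{t}_{b}" for b in range(12)] for t in range(800)}

    stage1_tracts = random.sample(sorted(tracts), 50)                  # stage 1: sample tracts
    stage2_blocks = [random.choice(tracts[t]) for t in stage1_tracts]  # stage 2: one block per tract

    respondents = []
    for block in stage2_blocks:
        # stage 3: screen every household on the sampled block for eligibility
        households = [[random.randint(10, 85) for _ in range(random.randint(1, 5))]
                      for _ in range(random.randint(20, 60))]
        for ages in households:
            eligible = [a for a in ages if 18 <= a <= 40]
            if eligible:
                # stage 4: select one eligible 18- to 40-year-old adult at random
                respondents.append((block, random.choice(eligible)))

    print(f"{len(respondents)} adults selected across {len(stage2_blocks)} blocks")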

Results

Table 1 presents answers to each debriefing probe for those respondents (n = 556) who also provided biological specimens. As described earlier, each item was measured on a seven-point scale, and all variables were coded such that higher values represented greater levels of self-reported difficulty in responding to the set of drug use questions included in the survey.

T-test comparisons of each debriefing probe by whether or not respondents had provided a discordant drug use report revealed several
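A comparison of this kind could be sketched as follows. The input file and column names ("probe_*" items scored so that higher values mean greater difficulty, and a 0/1 "discordant" flag) are hypothetical, and Welch's unequal-variance t-test is used here for simplicity; the published analysis may have handled variances or clustering differently.

    # Illustrative t-tests of each debriefing probe by discordant-report status.
    # File and column names are hypothetical placeholders.
    import pandas as pd
    from scipy import stats

    df = pd.read_csv("probes_and_discordance.csv")
    probe_cols = [c for c in df.columns if c.startswith("probe_")]

    for col in probe_cols:
        concordant_vals = df.loc[df["discordant"] == 0, col].dropna()
        discordant_vals = df.loc[df["discordant"] == 1, col].dropna()
        t, p = stats.ttest_ind(concordant_vals, discordant_vals, equal_var=False)
        print(f"{col}: t = {t:.2f}, p = {p:.3f}")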

Discussion

Consistent with our hypothesis, we found evidence that both memory difficulties and social desirability concerns may each be independent sources of measurement error in surveys of drug use epidemiology. Comprehension difficulties were not associated with measurement error in the models examined. When misreporting is decomposed into over- vs. under-reporting, however, it appears that social desirability concerns are primarily associated with drug use under-reporting, while over-reporting is

References (46)

  • A.R. Morral et al. Hardcore drug users claim to be occasional users: Drug use frequency underreporting. Drug Alcohol Depend (2000)
  • M. Fendrich et al. Drug test feasibility in a general population household survey. Drug Alcohol Depend (2004)
  • P. Miller. Is “up” right? The national household survey on drug abuse. Public Opinion Quarterly (1997)
  • M. Fendrich et al. Pathways and obstacles in drug use measurement. Special Issue of the Journal of Drug Issues (2000)
  • L. Harrison et al. The Validity of Self-reported Drug Use: Improving the Accuracy of Survey Estimates (1997)
  • W.S. Aquilino et al. Interview mode effects in drug use surveys. Public Opinion Quarterly (1990)
  • R. Tourangeau et al. Asking sensitive questions: The impact of data collection mode, question format, and question context. Public Opinion Quarterly (1996)
  • M. Fendrich et al. Validity of drug use reporting in a high risk community sample: A comparison of cocaine and heroin survey reports with hair tests. Am J Epidemiol (1999)
  • B. Mensch et al. Underreporting of substance use in a national longitudinal youth cohort: Individual and interviewer effects. Public Opinion Quarterly (1988)
  • H.M. Colon et al. The validity of drug use responses in a household survey in Puerto Rico: Comparison of survey responses of cocaine and heroin use with hair tests. Int J Epidemiol (2001)
  • M. Fendrich et al. The impact of interviewer characteristics on drug use reporting by male juvenile arrestees. Journal of Drug Issues (1999)
  • L.D. Johnston et al. The recanting of earlier reported drug use by young adults
  • A. Stueve et al. Inconsistencies over time in young adolescents' self-reports of substance use and sexual intercourse. Subst Use Misuse (2000)
  • N.C. Schaeffer. Asking questions about sensitive topics: A selective overview
  • T.P. Johnson et al. Cross-cultural sources of measurement error in substance use surveys. Subst Use Misuse (2003)
  • L.J. Ouelett et al. “Crack” versus “rock” cocaine: The importance of local nomenclature in drug research and education. Contemporary Drug Problems (1997)
  • M. Hubbard et al. Effects of decomposition of complex concepts
  • M.F. Schober et al. Misunderstanding standardized language in research interviews. Applied Cognitive Psychology (2004)
  • A. Suessbrick et al. Different respondents interpret ordinary questions quite differently. Proceedings of the American Statistical Association, Section on Survey Research Methods (2000)
  • C.F. Turner et al. Adolescent sexual behavior, drug use, and violence: Increased reporting with computer survey technology. Science (1998)
  • E. Blair et al. Cognitive processes used by survey respondents to answer behavioral frequency questions. Journal of Consumer Research (1987)
  • W.J. Friedman. Memory for the time of past events. Psychol Bull (1993)
  • S. Sudman et al. Thinking About Answers: The Applications of Cognitive Processes to Survey Methodology (1996)

This research was supported by National Institute on Drug Abuse Grant #R01DA12425. Data collection for this study was carried out by the University of Illinois Survey Research Laboratory (SRL).
