Electronic source materials in clinical research: Acceptability and validity of symptom self-rating in major depressive disorder☆
Introduction
Clinical research depends upon the accurate and timely collection of information regarding human participants, whether the research employs cross-sectional descriptions (e.g., features of an illness) or longitudinal observations (e.g., those made during a treatment trial). The efficient and effective management of this information-gathering process is of vital interest to the large number of researchers currently conducting clinical research. As of early March 2006, the National Institutes of Health’s online registry of clinical research programs (www.clinicaltrials.gov) listed over 27,000 protocols from over 120 nations, with several hundred protocols added each month. The magnitude of the information-gathering task varies from project to project, ranging from simple to quite complex. One example of a complex information-gathering task is the NIH’s Sequenced Treatment Alternatives to Relieve Depression (STAR∗D) project (Fava et al., 2003, Rush et al., 2004), a large clinical trial that enrolled 4041 participants at 14 Regional Centers across the US. Intake assessments for each participant involved recording responses to 17 structured instruments, ranging from five to 126 items each, to gather demographic data, current and past clinical information, and indices of level of function. Participants then underwent clinical assessments every two to three weeks for up to 48 weeks during treatment, and quarterly assessments for a further year in follow-up. Seven instruments were administered at each clinical assessment, with a range of 3–16 items per instrument, and 11 instruments were administered at the quarterly follow-up assessments. In all, this single study gathered a considerable volume of data and gives a sense of the data-collection burden other multi-site treatment trials may face.
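To give a rough sense of scale, the sketch below converts the figures above into an order-of-magnitude estimate of the number of individual data points such a trial generates. The per-instrument item averages and the acute-phase visit count are illustrative assumptions, not values reported by STAR∗D.

```python
# Back-of-envelope estimate of the STAR*D data-collection burden.
# Values marked "assumed" are illustrative midpoints, not study-reported figures.

participants = 4041          # enrolled participants (reported)
intake_instruments = 17      # instruments at intake (reported)
intake_items_avg = 40        # assumed average items per intake instrument (reported range: 5-126)

visit_instruments = 7        # instruments per acute-phase visit (reported)
visit_items_avg = 10         # assumed average items per visit instrument (reported range: 3-16)
acute_visits = 14            # assumed number of visits (~every 3 weeks over up to 48 weeks)

followup_visits = 4          # quarterly follow-up visits over one year (reported)
followup_instruments = 11    # instruments per follow-up visit (reported)

per_participant = (
    intake_instruments * intake_items_avg
    + acute_visits * visit_instruments * visit_items_avg
    + followup_visits * followup_instruments * visit_items_avg
)
print(f"Items per participant (rough): {per_participant:,}")
print(f"Study-wide data points (rough): {per_participant * participants:,}")
```

Even under these rough assumptions, the total runs to several million individual responses, each of which traditionally had to be recorded on paper and then keyed into electronic form.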
Historically, information has first been collected on printed forms (“source documents”) and entered into an electronic format at a later date. Before this information is regarded as suitable for analysis, research procedures include steps for checking for errors, omissions, and inconsistencies, and for resolving these issues. For trials designed to support the registration of new medications with the Food and Drug Administration, there are specific statutory and regulatory requirements regarding the content of research instruments and the processes used to gather information from participants. Projects like STAR∗D that are not subject to these requirements nevertheless follow many of the same general principles and practices.
The sheer volume of data collected in this way poses numerous challenges to the efficient conduct of high-quality clinical research, including:
- the amount of time participants and/or clinical raters spend gathering observations and recording them on source documents;
- the amount of time staff members spend entering data from paper records into electronic format;
- the amount of time spent on anomaly and error detection, correction, and reconciliation, a process often termed “data cleaning” or “data edits”;
- the impact on data accuracy when errors are detected only after they can no longer be corrected (e.g., there is no longer access to the research subject);
- the financial and time costs of printing forms and sending them to and from research sites, whether via physical shipment or electronic transmission (e.g., faxing);
- the costs of delays while these processes are carried out (delays in reporting findings for academicians and delays in bringing a new product to market for industry).
Clearly, opportunities exist for improving speed, accuracy, and quality, and thus for reducing the financial and time costs of conducting clinical research. One possible method for achieving these goals is to gather the data directly into an electronic source document, thereby eliminating the costs of printing and shipping paper documents and of transcribing them into electronic format. In this project, we examined the direct acquisition of clinical data into an electronic source document, using a tablet computer to complete a computerized self-report instrument.
Interaction with electronic data acquisition systems is not new to the public. Most individuals are familiar with conducting bank transactions via automated teller machines (ATMs), subscribing to pay-per-view entertainment programs on cable or satellite television via a handheld remote control, and obtaining customer service using automated “phone tree” systems via their telephone. Web-based online banking and sales of merchandise have become commonplace. However, widespread exposure to these technologies does not guarantee that clinical research participants will find recording information directly onto an electronic source document as acceptable and easy to use as the traditional solutions of paper forms and in-person interactions.
Interactive voice response (IVR) systems have been used in clinical research for a number of years to collect information over the telephone. A recent report by Mundt and colleagues (2006) found good agreement between clinician-administered ratings on the Montgomery–Asberg Depression Rating Scale (Montgomery and Asberg, 1979) and an IVR-administered self-rated version of the scale. While these systems have expanded the range of tools for researchers to use, the instructions and information provided to participants verbally (“press ‘1’ if you are feeling ...”) are presented at a rate that may be too rapid for some participants and too slow and tedious for others. Self-paced questionnaires administered using a computer in the clinician’s office have been used with a number of different instruments and populations (Carr et al., 1986, French and Beaumont, 1987, Chan-Pensley, 1999, Bayliss et al., 2003, Goodhart et al., 2005, Koestler et al., 2005, Kable et al., 2006, van Asselen et al., 2005, Titov and Knight, 2005), but not generally with the instruments commonly used to measure outcomes in clinical trials in depression, such as the Beck Depression Inventory (Beck et al., 1996), the Hamilton Rating Scale for Depression (Hamilton, 1960, Hamilton, 1967), or the Inventory of Depressive Symptomatology (Rush et al., 1986, Rush et al., 1996).
Before an electronic source document approach can be substituted for a traditional paper form in a clinical research project, the acceptability and equivalence of this approach need to be examined and validated in the relevant population. Ohayon (2006) recently emphasized the need to examine the validity of measures and their acceptability to patients as prerequisites to adopting computer-based assessments in psychiatric research. Kurt et al. (2004) compared a client–server computerized approach with traditional paper-and-pencil questionnaires in administering two instruments, the Center for Epidemiologic Studies of Depression Scale, Revised (CESD-R) and the Geriatric Depression Scale (GDS), to assess depressive symptoms in adults over the age of 65 in the primary care setting. The researchers reported high agreement between the paper and computer versions of the instruments and high acceptance of the electronic version, even though 72% of their sample reported no previous computer use.
In this study, we examined the use of a mobile “tablet” computer (tablet PC), equipped with a stylus-operated, touch-sensitive screen, to acquire ratings of depression symptom severity directly from patients with Major Depressive Disorder (MDD). The instrument used was an onscreen version of the 16-item Quick Inventory of Depressive Symptomatology – Self-Rated (QIDS-SR16) (Rush et al., 2000, Rush et al., 2003). The instrument presents 16 questions rating the presence and severity of symptoms of depression. Each question calls for a numerical response of 0, 1, 2, or 3 to indicate severity, with brief descriptions anchoring each level of severity. The QIDS has been examined in both clinician-rated and self-rated forms and has been shown to have desirable psychometric properties (Trivedi et al., 2004, Rush et al., 2006). It was the secondary outcome measure in the STAR∗D trial, and so is a prime candidate for future use in clinical trials research.
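For readers unfamiliar with the instrument, the sketch below shows one way the 16 item responses might be captured and totaled in software. The nine-domain scoring rule shown (taking the highest of the sleep, appetite/weight, and psychomotor items and summing these with the remaining single items to give a 0–27 total) follows the commonly cited QIDS convention; the authoritative scoring rules are those published by Rush et al. (2003).

```python
# Minimal sketch of QIDS-SR16 response capture and scoring.
# The nine-domain rule below follows the commonly cited convention (total 0-27);
# consult Rush et al. (2003) for the authoritative scoring rules.

def qids_sr16_total(responses: list[int]) -> int:
    """Return the QIDS-SR16 total score from 16 item responses (each 0-3).

    Items are indexed 1-16 in instrument order:
      1-4 sleep, 5 sad mood, 6-9 appetite/weight, 10 concentration,
      11 self-view, 12 suicidal ideation, 13 interest, 14 energy,
      15-16 psychomotor slowing/agitation.
    """
    if len(responses) != 16 or not all(r in (0, 1, 2, 3) for r in responses):
        raise ValueError("expected 16 responses, each coded 0-3")

    r = {i + 1: v for i, v in enumerate(responses)}   # 1-based lookup
    domains = [
        max(r[1], r[2], r[3], r[4]),      # sleep disturbance
        r[5],                             # sad mood
        max(r[6], r[7], r[8], r[9]),      # appetite/weight change
        r[10], r[11], r[12], r[13], r[14],
        max(r[15], r[16]),                # psychomotor change
    ]
    return sum(domains)                   # 0 (none) to 27 (most severe)

# Example: a mild symptom profile yields a total of 6
print(qids_sr16_total([1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0]))
```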
Our aims in this project were (a) to quantify the agreement between computer-based assessments of depression and their paper versions, and (b) to ascertain the acceptability of using computer-based assessments by adult outpatients with MDD.
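As an illustration of aim (a), the following sketch shows one conventional way paired paper and tablet totals could be compared: a correlation between the two modes plus a paired test for systematic bias. The data are toy values, and the study’s actual statistical procedures are those specified in its Methods, not this sketch.

```python
# Hedged sketch of quantifying paper vs. tablet agreement on QIDS-SR16 totals
# for paired administrations. Toy data; illustrative only.

import numpy as np
from scipy import stats

# Paired total scores for the same participants (toy values)
paper  = np.array([12, 8, 15, 21, 5, 10, 18, 7])
tablet = np.array([11, 9, 15, 20, 5, 11, 18, 8])

r, r_p = stats.pearsonr(paper, tablet)     # association between modes
t, t_p = stats.ttest_rel(paper, tablet)    # systematic mean difference
diff = tablet - paper

print(f"Pearson r = {r:.3f} (p = {r_p:.3f})")
print(f"Paired t  = {t:.3f} (p = {t_p:.3f})")
print(f"Mean difference (tablet - paper) = {diff.mean():.2f} "
      f"± {diff.std(ddof=1):.2f}")
```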
Section snippets
Design
This cross-sectional study was performed at four Centers in the NIMH funded Depression Trials Network, a national infrastructure for clinical research in depression: the UCLA Neuropsychiatric Institute & Hospital, the University of Michigan, the University of Pittsburgh, and Virginia Commonwealth University. The University of Texas Southwestern Medical Center served as the National Coordinating Center. The University of Pittsburgh served as the Data Coordinating Center (DCC), developed the
Results
Of 83 patients screened, three were determined to be ineligible (all three were found not to meet criteria for a current major depressive episode) and 80 were enrolled in the study.
Discussion
Five key findings emerged from this study: (a) symptom ratings on the QIDS-SR16 instrument were equivalent whether collected via paper or electronic means; (b) about half of the participants with MDD found the tablet PC approach to be easier to use than the traditional paper format, and rated it as not only acceptable but preferable to the paper format; (c) the majority of participants with MDD rated the tablet PC as acceptable to individuals with MDD; (d) a majority of participants reported
Acknowledgement
This project has been supported with Federal funds from the National Institute of Mental Health, National Institutes of Health, under Contract N01MH90003 to UT Southwestern Medical Center at Dallas (P.I.: A.J. Rush). We would like to acknowledge the editorial support of Jon Kilner, MS, MA, and the secretarial support of FastWord Information Processing Inc. (Dallas, TX).
References (27)
- et al. Background and rationale for the sequenced treatment alternatives to relieve depression (STAR∗D) study. Psychiatric Clinics of North America (2003).
- et al. Hypopituitary patients prefer a touch-screen to paper quality of life questionnaire. Growth Hormone & IGF Research (2005).
- et al. Computer-assisted assessment of depression and function in older primary care patients. Computer Methods and Programs in Biomedicine (2004).
- et al. Validation of an IVRS version of the MADRS. Journal of Psychiatric Research (2006).
- Methodology and assessments: the tools of the trade. Journal of Psychiatric Research (2006).
- et al. The 16-item Quick Inventory of Depressive Symptomatology (QIDS), clinician rating (QIDS-C), and self-report (QIDS-SR): a psychometric evaluation in patients with chronic major depression. Biological Psychiatry (2003).
- et al. Sequenced treatment alternatives to relieve depression (STAR∗D): rationale and design. Controlled Clinical Trials (2004).
- et al. An evaluation of the Quick Inventory of Depressive Symptomatology and the Hamilton Rating Scale for Depression: a sequenced treatment alternatives to relieve depression trial report. Biological Psychiatry (2006).
- Diagnostic and statistical manual of mental disorders (DSM-IV) (1994).
- et al. A study of the feasibility of Internet administration of a computerized health survey: the headache impact test (HIT). Quality of Life Research (2003).
- Manual for Beck Depression Inventory-II.
- Automated cognitive assessment of elderly patients: a comparison of two types of response device. British Journal of Clinical Psychology.
- Alcohol-use disorders identification test: a comparison between paper and pencil and computerized versions. Alcohol and Alcoholism.
☆ The content of this publication does not necessarily reflect the views or policies of the Department of Health and Human Services, nor does mention of trade names, commercial products, or organizations imply endorsement by the US Government.