CMAJ Open
Research

Current use and costs of electronic health records for clinical trial research: a descriptive study

Kimberly A. Mc Cord, Hannah Ewald, Aviv Ladanie, Matthias Briel, Benjamin Speich, Heiner C. Bucher and Lars G. Hemkens; for the RCD for RCTs initiative and the Making Randomized Trials More Affordable Group
February 03, 2019 7 (1) E23-E32; DOI: https://doi.org/10.9778/cmajo.20180096
Affiliations: Basel Institute for Clinical Epidemiology and Biostatistics (Mc Cord, Ewald, Ladanie, Briel, Speich, Bucher, Hemkens), Department of Clinical Research, University Hospital Basel, University of Basel; University Medical Library (Ewald), University of Basel; Swiss Tropical and Public Health Institute (Ladanie), University of Basel, Basel, Switzerland; Department of Health Research Methods, Evidence, and Impact (Briel), McMaster University, Hamilton, Ont.

Abstract

Background: Electronic health records (EHRs) may support randomized controlled trials (RCTs). We aimed to describe the current use and costs of EHRs in RCTs, with a focus on recruitment and outcome assessment.

Methods: This descriptive study was based on a PubMed search of RCTs published since 2000 that evaluated any medical intervention with the use of EHRs. Cost information was obtained from RCT investigators who used EHR infrastructures for recruitment or outcome measurement but did not explore EHR technology itself.

Results: We identified 189 RCTs, most of which (153 [81.0%]) were carried out in North America and were published recently (median year 2012 [interquartile range 2009–2014]). Seventeen RCTs (9.0%), involving a median of 732 (interquartile range 73–2513) patients, explored interventions not related to EHRs, including quality improvement, screening programs, and collaborative care and disease management interventions. In these trials, EHRs were used for recruitment (14 [82%]) and outcome measurement (15 [88%]). Overall, in most of the trials (158 [83.6%]), outcomes (including many of the most patient-relevant clinical outcomes, from unscheduled hospital admission to death) were measured with the use of EHRs. Among the EHR-supported trials for which cost information was obtained, per-patient costs varied from US$44 to US$2000, and total RCT costs from US$67 750 to US$5 026 000. In the remaining 172 RCTs (91.0%), EHRs were used as a modality of intervention.

Interpretation: Randomized controlled trials are frequently and increasingly conducted with the use of EHRs, but mainly as part of the intervention. In some trials, EHRs were used successfully to support recruitment and outcome assessment. Costs may be reduced once the data infrastructure is established.

See related article at www.cmaj.ca/lookup/doi/10.1503/cmaj.180841

Randomized controlled trials (RCTs) are the standard for evaluating benefits and harms of medical treatments. However, they are often time consuming and expensive to conduct, and some trials rely on strictly standardized research settings that may limit the generalizability of their results.1 Electronic health records (EHRs) — electronic databases containing patient-level variables that are gathered during routine medical care (Appendix 1, available at www.cmajopen.ca/content/7/1/E23/suppl/DC1) — provide great potential for implementing large and pragmatic trials.2,3 Randomized controlled trials could be integrated directly into routine care, offering almost perfect generalizability of their results.4 Recently, the Patient-Centered Outcomes Research Institute awarded US$332 million to 28 pragmatic clinical studies, many of them using EHR infrastructures and many of them integrated in routine care.5

Considerable debate persists regarding the potential barriers and limitations of EHR use in clinical research; these obstacles have been discussed in detail elsewhere.3,6 Briefly, the 2 largest direct advantages of using routinely collected data for clinical trials may be the facilitation of patient recruitment and of outcome assessment. Random allocation of treatment may occur directly from the EHR during the patient’s visit, maximizing recruitment rates.7 Recruiting patients through the EHR would allow investigators to prescreen for eligibility before approaching potential participants, enabling efforts to be tailored toward the appropriate sample; furthermore, automatic screening and selection of participants through the EHR database would favour rapid consecutive enrolment.8 This could substantially boost trials requiring large samples or trials that recruit slowly. Yet the ability to assess outcomes without having to measure or collect them separately could be the most appealing resource-sparing advantage of EHRs in RCTs. Even when funding is not an issue, the reduction in logistical burden alone, particularly in large RCTs, could justify extracting routinely collected EHR data. Thus, the EHR may have an important role in the implementation of large and pragmatic trials.2,3 This offers entirely new perspectives on evaluating health care interventions and favours the development of learning health care systems.7
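To make this recruitment mechanism concrete, the sketch below shows, in Python, how an existing EHR database could be queried to prescreen patients for eligibility and flag them for consecutive enrolment at the point of care. It is an illustration only: the table name, field names, eligibility criteria and 1:1 allocation are hypothetical assumptions, not taken from any trial discussed in this article.

```python
# Illustrative sketch only: prescreening an existing EHR database for trial
# eligibility. The table and column names ("ehr_patients", "age", "hba1c",
# "on_insulin") and the criteria are hypothetical assumptions.
import random
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE ehr_patients (
                    patient_id TEXT, age INTEGER, hba1c REAL, on_insulin INTEGER)""")
conn.executemany(
    "INSERT INTO ehr_patients VALUES (?, ?, ?, ?)",
    [("P001", 67, 8.4, 0), ("P002", 54, 6.1, 0), ("P003", 71, 9.2, 1)],
)

# Eligibility query run against routinely collected data (no new data collection):
# adults with poorly controlled diabetes who are not yet on insulin.
eligible = conn.execute(
    "SELECT patient_id FROM ehr_patients "
    "WHERE age >= 18 AND hba1c >= 8.0 AND on_insulin = 0"
).fetchall()

# Point-of-care style allocation: eligible patients could be randomized as they
# are flagged; a simple 1:1 coin flip is used here purely for illustration.
allocation = {pid: random.choice(["intervention", "control"]) for (pid,) in eligible}
print(allocation)  # e.g. {'P001': 'control'}
```

In practice, such a query would run against the care provider’s production EHR system under appropriate governance, and allocation would be handled by the trial’s randomization service rather than a local coin flip.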

However, the cost associated with implementing the EHR/electronic medical record infrastructure may be substantial.9 Although one could argue that using EHRs for research purposes might lead to more affordable trials, to our knowledge, there is no systematic overview of empirical cost estimates per individual trial participant in EHR-supported RCTs. We conducted a systematic descriptive survey of the use of EHRs in RCTs to determine how EHRs are implemented in clinical research settings and to describe specifically how this technology is used to support recruitment and outcome assessment. We aimed to determine the frequency of use of EHRs and describe possible applications of EHR technology in current practice, focusing on trials that were supported by the EHR rather than evaluating the EHR itself. We also aimed to determine the cost of using EHRs for RCTs.

Methods

We performed a descriptive study assessing the current use of EHR technology in RCTs. We included any RCT in humans published in English since January 2000 that addressed any health-related topic and that used EHRs for any purpose, including participant recruitment, intervention delivery or outcome assessment.10 Focusing on modern technology, we did not include older trials. There were no other eligibility criteria.

Definitions of the EHR and related data vary.10–12 Our working definitions are shown in Appendix 1. Briefly, we considered an EHR to be an archive of health-related data in digital form, collected during routine clinical care for each individual patient, stored and exchanged securely, and accessible by multiple authorized users in a network of care providers.11 The EHR infrastructure used in eligible RCTs must have already existed, and data must have been obtained through a query of the EHR database (i.e., data generated specifically for the experiment were not considered routinely collected, for example, when the trial evaluated the novel implementation of an EHR v. no such implementation). No protocol was published for this descriptive study.

Literature search

We searched PubMed (last search on Sept. 13, 2017) for English-language articles published since Jan. 1, 2000, using keywords including “electronic health record,” “electronic medical record,” “health information exchange,” “patient health record” and “e-health,” combined with an established RCT filter13 (Appendix 2, available at www.cmajopen.ca/content/7/1/E23/suppl/DC1). Our search integrated the search strategy for EHRs provided by the United States National Library of Medicine14 and was developed with the support of an information specialist (H.E.). Two reviewers (K.A.M. and H.E. or A.L.) screened titles and abstracts. We obtained the full text of any article deemed pertinent by at least 1 reviewer. One reviewer (K.A.M.) evaluated the full texts and determined eligibility, and another reviewer (L.G.H.) confirmed all exclusions.
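As an illustration of how such a keyword search combined with an RCT filter could be scripted, the sketch below uses Biopython’s interface to the NCBI E-utilities. The query string is deliberately shortened and simplified; the authors’ full search strategy is the one given in Appendix 2, and the email address is a placeholder.

```python
# Simplified sketch of scripting a PubMed keyword search with an RCT filter via
# NCBI E-utilities (Biopython). The query below is a shortened illustration,
# not the full strategy reported in Appendix 2.
from Bio import Entrez

Entrez.email = "[email protected]"  # placeholder; NCBI requires a contact address

ehr_terms = ('"electronic health record" OR "electronic medical record" OR '
             '"health information exchange" OR "patient health record" OR "e-health"')
rct_filter = 'randomized controlled trial[pt] OR randomized[tiab] OR placebo[tiab]'
query = f"({ehr_terms}) AND ({rct_filter}) AND english[lang]"

handle = Entrez.esearch(db="pubmed", term=query, datetype="pdat",
                        mindate="2000/01/01", maxdate="2017/09/13", retmax=5000)
record = Entrez.read(handle)
handle.close()
print(record["Count"], "records; first PMIDs:", record["IdList"][:5])
```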

Data extraction

We classified eligible RCTs based on the way in which the EHRs were used for 1) patient recruitment in any form, 2) outcome assessment in any form, 3) the trial intervention itself or 4) other possible purposes. For patient recruitment, we considered any effort to identify trial participants based on certain characteristics through an EHR query, as well as any random allocation of consecutive patients done through the EHR. For outcome assessment, we considered any trial in which any of the outcomes was obtained by querying or manually checking the EHR (i.e., where the outcome was routinely recorded within the EHR).

We then subclassified included RCTs into 1) EHR-supported trials, in which the EHR was used as a research tool for conducting the trial (e.g., patients with certain conditions were identified as an enhanced recruitment strategy, or adverse outcomes were queried through a hospital’s EHR), and 2) EHR-evaluating trials, in which use of an EHR or an EHR modification was evaluated as part of the randomly allocated intervention (e.g., a software alteration or addition, such as randomized implementation of a drug interaction alert system in a hospital’s EHR ordering system). Furthermore, we extracted the RCT’s research question, other study characteristics (sample size, country of origin and unit of randomization) and whether the trial included order entry systems (computerized physician order entry system or clinical decision-support system) (see Appendix 1), telehealth or personal health records.

For EHR-supported trials, we also determined the trial setting and more specific EHR uses (type of EHR and application in the trial, such as the type of alerts it would display in decision-support systems). Furthermore, we extracted whether an advanced algorithm for patient identification/recruitment or other purpose was developed. We also recorded whether the recruitment was done prospectively (e.g., by advertisement and invitation, not through the EHR), concurrently (i.e., in the point-of-care setting, through the EHR) or retrospectively (i.e., screening a patient list, through the EHR or not), and whether routinely collected data were the only outcome source or whether a hybrid approach was used. A hybrid approach could be that 1) some outcomes were based on routinely collected data alone and other outcomes were entirely actively collected or 2) some outcomes were measured based on routinely collected data, and this measurement was supplemented by active data collection (e.g., when reported by patients outside an EHR network), or a relevant amount (more than 10% of the total routinely collected data source) was manually checked for validation. We specifically recorded the primary outcome of the trial and whether it was measured with the use of routinely collected EHR data alone, when it was measured (duration of follow-up), and any information on missing data or loss to follow-up. Furthermore, we extracted, for each trial, whether blinding and allocation concealment measures were performed. We searched the full texts for keywords, such as “placebo,” “blind,” “label” and “mask,” to identify such statements and then proceeded with extracting the statement when reported. One reviewer (K.A.M.) extracted all data. We aimed to provide a general overview on potential issues of bias in the EHR-supported studies. Two reviewers (K.A.M. and B.S.) used the Cochrane risk of bias tool,15 and a third reviewer (K.A.M., H.E., B.S. or L.G.H.) verified the assessments. Any disagreement was resolved by discussion.

Trial costs

We contacted the authors of the included EHR-supported trials through a standardized email to request cost information and extracted any cost information reported in the publications. We aimed to obtain a cost estimate that would allow comparison with traditional trials; therefore, we were not interested in the costs of EHR-evaluating trials. We explained to the authors that the costs of the trial could be divided into 3 major categories:16 1) cost of project/trial development and preparation (e.g., insurance, travel, infrastructure, consulting, sample size calculation, database set-up), 2) cost of enrolment, treatment and follow-up (e.g., per-patient costs, salary costs, patient reimbursement costs, material and/or drug costs) and 3) cost after last patient out (e.g., data-cleaning costs, analysis costs, publication costs). We aimed for only a rough cost estimate and accepted any information we could obtain. We converted cost values to US dollars where applicable, based on the exchange rate on Nov. 1, 2017.17 We sent the data presented here to all trial authors for confirmation.

All costs are reported in US dollars.
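The sketch below illustrates the 3-part cost breakdown described above and the derivation of a per-patient cost; all monetary figures, the sample size and the exchange rate are made-up placeholders rather than data from any included trial.

```python
# Illustrative sketch of the 3-part trial cost breakdown described above.
# All figures are made-up placeholders, not data from any included trial.
from dataclasses import dataclass

@dataclass
class TrialCosts:
    development_and_preparation: float   # e.g., insurance, infrastructure, database set-up
    enrolment_treatment_followup: float  # e.g., per-patient, salary, material/drug costs
    after_last_patient_out: float        # e.g., data cleaning, analysis, publication

    def total(self) -> float:
        return (self.development_and_preparation
                + self.enrolment_treatment_followup
                + self.after_last_patient_out)

example = TrialCosts(50_000.0, 120_000.0, 30_000.0)
n_patients = 500          # hypothetical sample size
local_to_usd = 1.00       # assumed exchange rate on a given reference date
per_patient_usd = example.total() * local_to_usd / n_patients
print(round(per_patient_usd, 2))  # per-patient cost in US dollars
```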

Statistical analysis

We report results descriptively using proportions and medians with interquartile ranges. Since our study was exploratory, we did not use any statistical tests.
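For illustration, a minimal sketch of these descriptive summaries (median with interquartile range, and simple proportions) is shown below; the per-trial sample sizes used are placeholders.

```python
# Minimal sketch of the descriptive summaries used in this study: medians with
# interquartile ranges and simple proportions. Sample sizes are placeholders.
import numpy as np

sample_sizes = [732, 73, 2513, 450, 1200]        # hypothetical per-trial sample sizes
median = np.median(sample_sizes)
q1, q3 = np.percentile(sample_sizes, [25, 75])
print(f"median {median:.0f} (IQR {q1:.0f}-{q3:.0f})")

n_ehr_supported, n_total = 17, 189
print(f"proportion EHR-supported: {100 * n_ehr_supported / n_total:.1f}%")  # 9.0%
```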

Ethics approval

As this article does not contain any personal medical information about any identifiable living people, ethics approval was not required.

Results

After 1680 titles and abstracts were screened, 394 potentially relevant articles were obtained as full texts, of which 189 were eligible (Figure 1). Of the 189 RCTs, 17 (9.0%) were supported by EHRs, and in 172 (91.0%), EHRs were used as a modality of intervention (EHR-evaluating).

Figure 1: Flow chart showing trial selection. Note: EHR = electronic health record, RCT = randomized controlled trial.

Most of the EHR-supported and the EHR-evaluating trials originated from North America (13 [76%] and 140 [81.4%], respectively), and trials were published recently (median year 2012 [interquartile range 2009–2014]) (Table 1). Three (18%) of the EHR-supported trials and 61 (35.5%) of the EHR-evaluating trials were cluster-randomized. There were no placebo-controlled trials in our sample. The majority (101 [53.4%]) did not report the level of blinding. Blinded outcome assessment was the most frequently reported approach (35 trials [18.5%]), followed by open-label designs (27 [14.3%]), single blinding (19 [10.0%]) and double blinding (7 [3.7%]).

Table 1: Characteristics of randomized controlled trials published in English between January 2000 and Sept. 13, 2017 that used electronic health records

Trials supported by electronic health records

The interventions and settings varied among the 17 EHR-supported RCTs18–34 (Table 2). Five trials (29%) used the EHR of a US Department of Veterans Affairs or affiliated facility. Most trials evaluated quality-improvement interventions, which often involved clinician education and feedback initiatives (8 [47%]), screening programs (4 [24%]), and collaborative care and disease management interventions integrated into primary care settings (3 [18%]). Almost half of the studies (8 [47%]) took place in primary care clinics, 5 (29%) were conducted in health care networks, and 3 (18%) took place in hospitals. One trial18 (6%) was performed entirely within a pharmacy electronic medical record.

Table 2: Characteristics of randomized controlled trials supported by electronic health records

Supported outcome measurement

Fifteen trials (88%) measured outcomes using the EHR (Table 1). The EHR-assessed outcomes were typically screening uptake (e.g., women seeking a Papanicolaou test after receiving an automated call from the EHR prompting cervical cancer screening) (6 trials [40%]), clinical outcomes (4 [27%]), drug adherence (2 [13%]) or guideline-concordant care measures (2 [13%]). In 7 (47%) of the 15 trials, the routinely collected data source was the only source of outcome data in the entire trial. In the remaining 8 trials (53%), a hybrid approach was applied, with some outcome data being collected actively. In 4 of these 8 cases,19,20,28,31 the primary outcome was fully extracted from an EHR but additional outcomes were collected actively, and in 3 cases,21,22,33 the primary outcome was collected actively but additional outcomes were EHR-based. In 1 case,23 the primary outcome was collected through the EHR but was verified with actively collected data. Twelve (80%) of the 15 trials relied on the EHR for primary outcome assessment.

The median trial duration was 10 months (interquartile range 5–12 mo); in 10 (59%) of the 17 trials, the amount of missing data or the number of patients lost to follow-up was reported, but none reported on the quality of the data.

Supported recruitment

Fourteen trials (82%) used the EHR as a tool for patient recruitment (Table 1). A prospective approach was reported in 1 trial,30 and in the remaining 13 trials, the EHR was used retrospectively (i.e., manual check or simple retrospective query of eligible patients via EHR). In addition, 1 trial18 used a complex querying system (another trial27 appeared to, but this was not specifically reported). The remaining 3 trials (18%) used a (traditional) prospective recruitment approach without the use of EHRs.

Costs

We contacted 13 of the 17 corresponding authors of the EHR-supported trials. Emails to 3 addresses were undeliverable, and we were unable to find alternative contact information online, so we could not reach those authors. We obtained information on trial costs for 4 of the 17 trials18,24,27,30 and intervention cost data for 1 trial34 (response rate 24%).

Cost information came from 1 Australian trial18 and 4 US trials24,27,30,34 (2 within the Department of Veterans Affairs network27,34). The costs varied from $67 750 to $5 026 000 (median $86 753) for total trial costs and from $44 to $2000 (median $315) for per-patient costs (Table 3). Overall trial costs were derived from funding budgets in 3 cases,18,24,27 and 1 author stated that the overall costs were $2000 per patient.30 In the trial in which the EHR database was leveraged through automated data extraction,18 the per-patient cost was $44. In the 2 trials in which the extraction of study data from the EHR source was done manually,24,30 the per-patient costs were $560 and $2000, respectively. We have no information in this regard for 1 trial.27 For the trial that presented only the costs of the intervention (extracting data from the EHR to give feedback to health care providers),34 a cost of $44 per patient was reported when the data were extracted manually, and a sensitivity analysis indicated that this cost could decrease to $9 if the data were extracted automatically.

Table 3: Costs of randomized controlled trials supported by electronic health records

Risk of bias

Three trials had no indication of high risk of bias in any of the domains assessed18,25,26 (Table 4). There was no indication of high risk of bias related to randomization or allocation concealment in any of the trials. Most trials were open label or assessed an intervention that could not be concealed from participants or providers, which may indicate a high risk of bias. Relevant to EHR trials, the risk of attrition bias was generally low (missing outcome data for no more than 10% of patients), and in 4 trials, all data were reported for all patients.21,27,28,32

Table 4: Risk of bias assessment with the Cochrane risk of bias tool15

Trials using electronic health records for intervention

Among the 172 EHR-evaluating trials (references in Appendix 3, available at www.cmajopen.ca/content/7/1/E23/suppl/DC1), the investigators measured outcomes using the EHR in 143 (83.1%), and the EHR was used as a tool for patient recruitment in 91 (52.9%) (Table 1). Computerized physician order entry systems or clinical decision-support systems were evaluated in 128 (74.4%). Personal health records were evaluated in 26 (15.1%). Telemonitoring devices measuring vital signs that were tethered to the EHR were evaluated in 14 trials (8.1%), and electronic patient-reported outcomes were evaluated in 4 (2.3%) (Table 1).

Interpretation

In most of the identified trials in which EHRs were used, EHR technology itself was explored. However, we identified 17 trials that investigated an EHR-unrelated intervention and were supported by the use of EHRs for patient recruitment or outcome assessment. Most trials were published recently, indicating a rapid development in this field.

The potential of registry-based trials for comparative effectiveness research and the current state of using registries for RCTs, in particular for outcome ascertainment, have been reviewed recently.8,35 Interestingly, although the settings and implementation were similar to those identified in our sample, registry trials are most frequently performed in Scandinavian countries35 and EHR trials predominantly in North America. In addition, primary outcome data in registry trials are often collected with the use of routine data (82%), similarly to EHR trials (80%), which indicates confidence in the reliability of these data.35 Information about data quality and validity was rarely reported for registry-based trials (11%)35 and was not reported in any of the EHR-supported RCTs in our sample, which suggests similar reporting problems as in observational research based on routinely collected data.36 This may be expected given the current lack of a standardized reporting guideline for RCTs in which routinely collected data are used but also highlights a substantial transparency problem.

In most (84%) of the trials in our sample, outcomes were measured with the use of EHRs, including many of the most patient-relevant clinical outcomes, from unscheduled hospital admission to death. But there were also less pragmatic and more exploratory, mechanistic37,38 outcomes that help to understand pathophysiological processes: for example, in 1 study, EHR-extracted lipid levels were used during a trial of a lipid-lowering agent.39 We also identified a trial, the Salford lung study,33 that used routinely collected data in a prelicensing setting in the context of drug approval.

The identified EHR-supported trials were heterogeneous with regard to their targeted populations and outcomes measured, with a few exceptions. For example, about a third of this subsample were Veterans Affairs trials, in which the EHR was used for outcome assessment and patient identification in all cases. This is likely due to the fact that the Department of Veterans Affairs has a long-established EHR system, and its widespread network allows for ease in designing and implementing these types of trials.

Another interesting finding that relates to the EHR-evaluating trials in our sample is the high proportion (about one-third) of trials in which cluster randomization was used. This indicates that EHR-based trials mostly evaluate interventions not at the patient level but, rather, more at a system level, as when aiming to redirect physician behaviour. This introduces the risk of contamination between the units of randomization (e.g., physicians) and thus requires a cluster design to be implemented.

Other than affordability, the great theoretical value of integrating the EHR into clinical trials lies in its potential for patient recruitment. For example, D’Avolio and colleagues40 reported on a Veterans Affairs pilot study that, like those identified in our sample, showed how convenient it can be to identify patients based on specific characteristics (the EHR database is “scanned,” and a list of possibly eligible patients results) and even to recruit them by sending an automatic electronic message to their clinician. Even with a lower response rate, when the contacted patients number in the thousands, this could lead to greater recruitment capacity, which could be of substantial value, particularly in RCTs in which difficult recruitment is anticipated during planning. We found that, in almost half of the EHR-supported trials that used EHRs for recruitment, the investigators made use of more sophisticated techniques such as the proposed mechanisms of data mining. Although in some trials patients were recruited by screening the EHR without a particular algorithm being specified, most EHRs will require some programming to identify specific traits in the system beyond the basic EHR abilities (e.g., typing an International Statistical Classification of Diseases and Related Health Problems, 10th revision, Clinical Modification code for diabetes in a search window to obtain a list of patients, which can be done manually). More advanced EHR add-ons, which can screen for multiple variables at multiple levels simultaneously and continuously (i.e., screening the system every 2 hours or instantly during care for the entire duration of the trial), require planning and validation. An example of such an EHR screening tool is one developed and used in the 2008 trial by Bereznicki and colleagues,18 in which the data-mining tool scrutinized the pharmacy electronic medical record according to a specified protocol (a history of asthma medication being dispensed more frequently than guidelines recommend) to flag patients with poorly controlled asthma. These patients were then contacted, received educational material for self-management and were prompted to contact their care providers. This shows how using the EHR for patient identification and recruitment can be done efficiently yet requires substantial planning and software development. We provide a general framework with the various potential applications and challenges of using routinely collected data in different trial conduct phases elsewhere.3,6
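To illustrate the kind of protocol-driven screening tool described above, the following sketch flags patients whose dispensing records exceed an assumed frequency threshold within a rolling window. The records, field names and the threshold are hypothetical assumptions for illustration; they are not the actual rule used in the trial by Bereznicki and colleagues.18

```python
# Illustrative sketch of a protocol-driven dispensing-record screen of the kind
# described above. The records, field names and the 3-dispensings-per-90-days
# threshold are hypothetical assumptions, not the trial's actual rule.
from collections import Counter
from datetime import date, timedelta

dispensings = [  # (patient_id, drug_class, dispensing_date)
    ("P001", "reliever", date(2017, 1, 5)),
    ("P001", "reliever", date(2017, 2, 1)),
    ("P001", "reliever", date(2017, 3, 10)),
    ("P002", "reliever", date(2017, 1, 20)),
]

WINDOW = timedelta(days=90)
THRESHOLD = 3                      # assumed guideline-based limit within the window
today = date(2017, 3, 31)          # date on which the screen is run

counts = Counter(pid for pid, drug, d in dispensings
                 if drug == "reliever" and today - d <= WINDOW)
flagged = [pid for pid, n in counts.items() if n >= THRESHOLD]
print(flagged)  # patients to contact with self-management material, e.g. ['P001']
```

In a deployed tool, this logic would run as a scheduled query against the pharmacy record system rather than on an in-memory list, with the threshold validated against the relevant clinical guideline before use.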

The author-reported costs could support the assumption that using routinely collected data for RCTs may promote cost reduction as long as the outcome data source is already established and is not a financial responsibility of the research endeavour. In the 3 trials in which the EHR infrastructure was well established and was merely redirected for use in the trials,18,27,34 the cost per patient (median $44) was much lower than often-reported costs in traditional trials.41 The costs of the 2 trials in which the infrastructure was less integrated (such as actively screening the EHR for assessing the clinical outcome), a median of $1280 per patient, were more similar to those of traditional RCTs.24,30 A recently published overview of registry trials showed similar trial cost patterns (i.e., a reduction of costs when the outcome data did not require manual collection but, rather, the registry infrastructure was leveraged).8

Limitations

Some limitations of our study merit attention. First, we did not aim for a complete sample of all published EHR-based trials, and we searched PubMed only; rather, we aimed for a systematic, comprehensible and reproducible survey of the current literature. We used a highly sensitive search algorithm and implemented specific EHR search filters provided by the US National Library of Medicine. Nonetheless, we assume that we overlooked several pertinent publications that did not indicate the application of EHRs in their keywords, title or abstract. This may have led to overrepresentation of EHRs used for interventions in our sample, and the observed disproportion of EHR-evaluating and EHR-supported trials needs to be interpreted with caution.

Second, searching for English-language articles indexed in PubMed alone may have created regional bias, with potential overrepresentation of Anglo-American studies. This could explain the high proportion of studies from the US. Nonetheless, substantial legislative and financial efforts have been made in North America to encourage the acquisition and use of EHR technology, which is a more likely reason for this imbalance.

Third, the trials were highly diverse, showing the various fields of EHR application, but we would need more data to further evaluate individual details and to explore, for example, the ethical constraints associated with no-consent point-of-care trials.42,43

Fourth, only 1 reviewer assessed the eligibility of the full texts and completed several parts of the data extraction, which may have introduced error in the selection of the trials. Nonetheless, we feel that the identified trials provide an overview of the mode of use of the EHR in RCTs.

Fifth, we did not test any hypothesis regarding the effect of using the EHR in trials, nor did we assess the impact of using the EHR on outcome ascertainment. Although we extracted a few characteristics that can point to the methodological quality of the studies, including an evaluation of major domains of risk of bias, we did not evaluate the treatment effects reported in the trials but merely offered a description of their use.

Finally, we obtained only a few rough cost estimates without details, which did not allow us to deduce any cost patterns; however, they provide first estimates that shed some light on this area.

Conclusion

Electronic health records are a novel and valuable addition to clinical research. There are numerous examples of how the EHR was implemented successfully in clinical research settings, supporting recruitment and outcome measurement in randomized trials. Electronic health records may be associated with lower research costs, allowing the conduct of more or larger RCTs. Altogether, these are promising developments toward more randomized real-world evidence.

Footnotes

  • Competing interests: Kimberly Mc Cord, Matthias Briel, Heiner Bucher and Lars Hemkens support the RCD for RCTs initiative, which aims to explore the use of routinely collected data for randomized clinical trials. Kimberly Mc Cord, Matthias Briel, Benjamin Speich and Lars Hemkens are members of the Making Randomized Trials More Affordable (MARTA) Group. No other competing interests were declared.

  • This article has been peer reviewed.

  • Contributors: Lars Hemkens conceived and designed the study. Kimberly Mc Cord, Hannah Ewald and Aviv Ladanie screened titles, abstracts and full-text publications. Kimberly Mc Cord extracted the data, and Kimberly Mc Cord and Lars Hemkens analyzed the data. Kimberly Mc Cord and Lars Hemkens drafted the manuscript. All of the authors interpreted the data, critically revised the manuscript for important intellectual content, gave final approval of the version to be published and agreed to be accountable for all aspects of the work.

  • Funding: This work was supported by the Stiftung Institut für klinische Epidemiologie.

  • Disclaimer: The funder had no role in the design and conduct of the study; collection, management, analysis and interpretation of the data; and preparation, review or approval of the manuscript or its submission for publication.

  • Supplemental information: For reviewer comments and the original submission of this manuscript, please see www.cmajopen.ca/content/7/1/E23/suppl/DC1.

References

  1. Bothwell LE, Greene JA, Podolsky SH, et al. Assessing the gold standard — lessons from the history of RCTs. N Engl J Med 2016;374:2175–81.
  2. Zuidgeest MGP, Goetz I, Groenwold RHH, et al.; GetReal Work Package 3. Series: Pragmatic trials and real world evidence: Paper 1. Introduction. J Clin Epidemiol 2017;88:7–13.
  3. Mc Cord KA, Al-Shahi Salman R, Treweek S, et al. Routinely collected data for randomized trials: promises, barriers, and implications. Trials 2018;19:29.
  4. Ramsberg J, Neovius M. Register or electronic health records enriched randomized pragmatic trials: the future of clinical effectiveness and cost-effectiveness trials? Nordic J Health Econ 2015;3:1–15.
  5. PCORI-funded Pragmatic Clinical Studies projects. Washington: Patient-Centered Outcomes Research Institute; 2016 [updated 2017 Sept. 26]. Available: www.pcori.org/research-results/pragmatic-clinical-studies/pcori-funded-pragmatic-clinical-studies-projects (accessed 2017 Oct. 10).
  6. Mc Cord KA, Hemkens LG. Using electronic health records for clinical trials: Where do we stand and where can we go? CMAJ 2019;191:E128–33.
  7. Friedman CP, Wong AK, Blumenthal D. Achieving a nationwide learning health system. Sci Transl Med 2010;2:57cm29.
  8. Li G, Sajobi TT, Menon BK, et al.; 2016 Symposium on Registry-Based Randomized Controlled Trials in Calgary. Registry-based randomized controlled trials — What are the advantages, challenges, and areas for future research? J Clin Epidemiol 2016;80:16–24.
  9. Menachemi N, Collum TH. Benefits and drawbacks of electronic health record systems. Risk Manag Healthc Policy 2011;4:47–55.
  10. What is an electronic health record (EHR)? Washington: Office of the National Coordinator for Health Information Technology [reviewed 2018 Mar. 21]. Available: www.healthit.gov/faq/what-electronic-health-record-ehr (accessed 2018 Dec. 10).
  11. Health informatics — electronic health record — definition, scope and context. Geneva: International Organization for Standardization; 2005. ISO/TR 20514:2005.
  12. Electronic health records. Baltimore: Centers for Medicare & Medicaid Services [modified 2012 Mar. 26]. Available: www.cms.gov/Medicare/E-Health/EHealthRecords/index.html?redirect=/EhealthRecords/ (accessed 2017 Nov. 10).
  13. Higgins JPT, Green S, editors. Cochrane handbook for systematic reviews of interventions. Version 5.1.0. Box 6.4.a: Cochrane highly sensitive search strategy for identifying randomized trials in MEDLINE: sensitivity-maximizing version (2008 revision), PubMed format. Oxford (UK): Cochrane Collaboration; 2011. Available: http://handbook-5-1.cochrane.org (accessed 2017 Jan. 1).
  14. 2016 MeSH highlights: questions and answers. Bethesda (MD): US National Library of Medicine; 2016 [updated 2018 Apr. 27]. Available: www.nlm.nih.gov/bsd/disted/clinics/mesh_2016_qa.html (accessed 2017 Nov. 10).
  15. Higgins JPT, Green S, editors. Cochrane handbook for systematic reviews of interventions. Version 5.1.0. The Cochrane Collaboration’s tool for assessing risk of bias. Oxford (UK): Cochrane Collaboration; 2011. Available: http://handbook-5-1.cochrane.org (accessed 2019 Jan. 8).
  16. Speich B, von Niederhäusern B, Blum CA, et al.; MAking Randomized Trials Affordable (MARTA) Group. Retrospective assessment of resource use and costs in two investigator-initiated randomized trials exemplified a comprehensive cost item list. J Clin Epidemiol 2018;96:73–83.
  17. Realtimekurse | Aktien | Börsenkurse | Börse [exchange rates]. Available: www.finanzen.ch/ (accessed 2017 Nov. 1).
  18. Bereznicki BJ, Peterson GM, Jackson SL, et al. Data-mining of medication records to improve asthma management. Med J Aust 2008;189:21–5.
  19. Corson K, Doak MN, Denneson L, et al. Primary care clinician adherence to guidelines for the management of chronic musculoskeletal pain: results from the study of the effectiveness of a collaborative approach to pain. Pain Med 2011;12:1490–501.
  20. de Jong J, Visser MR, Wieringa-de Waard M. Steering the patient mix of GP trainees: results of a randomized controlled intervention. Med Teach 2013;35:101–8.
  21. Fu SS, van Ryn M, Sherman SE, et al. Proactive tobacco treatment and population-level cessation: a pragmatic randomized clinical trial. JAMA Intern Med 2014;174:671–7.
  22. Galbreath AD, Krasuski RA, Smith B, et al. Long-term healthcare and cost outcomes of disease management in a large, randomized, community-based population with heart failure. Circulation 2004;110:3518–26.
  23. Gerber JS, Prasad PA, Fiks AG, et al. Effect of an outpatient antimicrobial stewardship intervention on broad-spectrum antibiotic prescribing by primary care pediatricians: a randomized trial. JAMA 2013;309:2345–52.
  24. Green BB, Wang CY, Anderson ML, et al. An automated intervention with stepped increases in support to increase uptake of colorectal cancer screening: a randomized trial. Ann Intern Med 2013;158:301–11.
  25. Hoffman RM, Steel S, Yee EF, et al. Colorectal cancer screening adherence is higher with fecal immunochemical tests than guaiac-based fecal occult blood tests: a randomized, controlled trial. Prev Med 2010;50:297–9.
  26. Israel EN, Farley TM, Farris KB, et al. Underutilization of cardiovascular medications: effect of a continuity-of-care program. Am J Health Syst Pharm 2013;70:1592–600.
  27. McCarren M, Furmaga E, Jackevicius CA, et al. Improvement of guideline β-blocker prescribing in heart failure: a cluster-randomized pragmatic trial of a pharmacy intervention. J Card Fail 2013;19:525–32.
  28. Stewart JC, Perkins AJ, Callahan CM. Effect of collaborative care for depression on risk of cardiovascular events: data from the IMPACT randomized controlled trial. Psychosom Med 2014;76:29–37.
  29. Phillips CE, Rothstein JD, Beaver K, et al. Patient navigation to increase mammography screening among inner city women. J Gen Intern Med 2011;26:123–9.
  30. Piazza G, Anderson FA, Ortel TL, et al. Randomized trial of physician alerts for thromboprophylaxis after discharge. Am J Med 2013;126:435–42.
  31. Qureshi N, Armstrong S, Dhiman P, et al.; ADDFAM (Added Value of Family History in CVD Risk Assessment) Study Group. Effect of adding systematic family history enquiry to cardiovascular disease risk assessment in primary care: a matched-pair, cluster randomized trial. Ann Intern Med 2012;156:253–62.
  32. Skinner CS, Halm EA, Bishop WP, et al. Impact of risk assessment and tailored versus nontailored risk information on colorectal cancer testing in primary care: a randomized controlled trial. Cancer Epidemiol Biomarkers Prev 2015;24:1523–30.
  33. Vestbo J, Leather D, Diar Bakerly N, et al. Effectiveness of fluticasone furoate–vilanterol for COPD in clinical practice. N Engl J Med 2016;375:1253–60.
  34. Wolf MS, Fitzner KA, Powell EF, et al. Costs and cost effectiveness of a health care provider-directed intervention to promote colorectal cancer screening among veterans. J Clin Oncol 2005;23:8877–83.
  35. Mathes T, Buehn S, Prengel P, et al. Registry-based randomized controlled trials merged the strength of randomized controlled trials and observational studies and give rise to more pragmatic trials. J Clin Epidemiol 2018;93:120–7.
  36. Hemkens LG, Benchimol EI, Langan SM, et al. The reporting of studies using routinely collected health data was often insufficient. J Clin Epidemiol 2016;79:104–11.
  37. Schwartz D, Lellouch J. Explanatory and pragmatic attitudes in therapeutical trials. J Chronic Dis 1967;20:637–48.
  38. Karanicolas PJ, Montori VM, Devereaux PJ, et al. A new “mechanistic-practical” framework for designing and interpreting randomized trials. J Clin Epidemiol 2009;62:479–84.
  39. Lester WT, Grant RW, Barnett GO, et al. Randomized controlled trial of an informatics-based intervention to increase statin prescription for secondary prevention of coronary disease. J Gen Intern Med 2006;21:22–9.
  40. D’Avolio L, Ferguson R, Goryachev S, et al. Implementation of the Department of Veterans Affairs’ first point-of-care clinical trial. J Am Med Inform Assoc 2012;19:e170–6.
  41. Speich B, von Niederhäusern B, Schur N, et al.; MAking Randomized Trials Affordable (MARTA) Group. Systematic review on costs and resource use of randomized clinical trials shows a lack of transparent and comprehensive data. J Clin Epidemiol 2018;96:1–11.
  42. Edwards SJ, Lilford RJ, Braunholtz DA, et al. Ethical issues in the design and conduct of randomised controlled trials. Health Technol Assess 1998;2:i–vi, 1–132.
  43. Baum M. Do we need informed consent? Lancet 1986;2:911–2.
  • Copyright 2019, Joule Inc. or its licensors