Introduction
Audit and feedback (A&F) is a well-known healthcare intervention, which can be defined as ‘any summary of clinical performance of healthcare providers over a specified period of time’.1 A&F has proven to be effective, generally leading to small but potentially important improvements in professional practice.2 For A&F to be effective, a number of features, such as feedback frequency, are known to be important, and certain modifiable design elements have been identified that help explain differences between A&F interventions and indicate gaps in the reporting of interventions.2,3 Previous work has also defined theory-informed hypotheses as a foundation for the development of future A&F interventions, and suggestions for improving the effectiveness of these interventions have been published.4-6 These hypotheses can be classified according to different aspects of the intervention; for example, A&F interventions can be evaluated on recipient-related aspects, the targeted behavior, and the delivery and content of the feedback.4,5 For feedback content, features such as the use of benchmarks for comparative purposes and feedback with a low cognitive load could be important in an A&F intervention.4 Furthermore, the credibility of the feedback, for example feedback based on good-quality evidence, has also been suggested to play an important role because of its potential to increase recipients’ trust in the feedback.4 However, the importance of these hypotheses and feedback features in the design of an A&F intervention still needs to be investigated.4,5
In addition to the many studies examining why and when A&F is of use, research is being published on tools to facilitate feedback, especially via an electronic medium.7-10 Electronic A&F can be defined as ‘the utilization of interactive computer interfaces to provide clinical performance summaries to healthcare professionals.’8,11,12 With the evolution of health information technology, electronic A&F based on data stored in the electronic health record (EHR) offers a promising direction for A&F interventions.13,14 By automating an A&F intervention and providing the feedback in electronic form to the healthcare provider, the number of patients whose quality of care can be evaluated could increase drastically, which in turn could lead to better quality of care.15 Large data repositories are already available in several countries and could be useful for this purpose.16-18 These databases collect routine primary healthcare data, anonymized at the source, and use them to address many research questions of interest.19 EHR-extractable quality indicators are also available and can be used in an electronic A&F intervention to evaluate and improve care for different diseases in primary care.20,21
A previous systematic review of electronic A&F in primary care and hospital settings investigated the effectiveness of these interventions and their use of behavior change mechanisms.12 However, due to the high heterogeneity of the included studies, the effect of the interventions was highly variable and inconclusive.12 Furthermore, there is evidence that new research is not benefiting the field and that new trials fail to explore the factors responsible for A&F effectiveness,22 in particular for electronic A&F. For improving and understanding future electronic A&F interventions, it is therefore important that these factors are identified and that we understand why electronic A&F works, so that interventions can be designed that are best suited to their purpose.23 In addition, little is known about which features of electronic A&F are useful for optimizing an electronic A&F intervention in primary care. The aims of this systematic review are therefore: 1) to assess whether electronic A&F is effective for improving healthcare provider performance and healthcare outcomes in primary care and 2) to uncover facilitating factors that contribute to the effectiveness of electronic A&F in primary care, as proposed in previous research.
Methods
Background
The protocol of this systematic review is described in detail on
PROSPERO:
https://www.crd.york.ac.uk/prospero/display_record.php?ID=CRD42018089069
Our inclusion and exclusion criteria and our search strategy were based on the Cochrane review on A&F.2 Although this Cochrane review examined A&F in primary and non-primary care, we opted to use the same strategy and criteria and applied our additional inclusion and exclusion criteria (primary care and electronic A&F) after the search. Our reporting adheres to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement.24 (see Appendix 1)
Inclusion and exclusion criteria
Randomized controlled trials in which the intervention was set up for
primary healthcare providers responsible for patient care were included.
The interventions studied in the included RCTs had to be electronic
A&F, either alone or as a core element of a multifaceted intervention.
Electronic A&F was defined as ‘any summary, which was delivered
electronically, of clinical performance of healthcare over a specified
period of time’. To distinguish between core and non-core elements of an intervention, the same criteria as in the 2012 Cochrane review on A&F were adopted: A&F was classified as a non-core element if the intervention could easily be offered in the absence of the A&F component.2
As in the Cochrane review, studies in which real-time feedback was provided during procedural skills were excluded, as were studies that examined feedback on performance with simulated patient interactions and studies in which the term feedback would be best classified as ‘facilitated relay’ of patient-specific clinical information.2 Randomized controlled trials that were not conducted in a primary care setting were also excluded, as were studies without full-text availability (e.g. conference abstracts) and studies in which it was unclear whether the feedback was delivered electronically.
Searches
We replicated the search strategy provided in the Cochrane review on
A&F. 2 Although our systematic review only addressed
randomized controlled trials about electronic A&F in primary care, we
opted to use exactly the same search terms as those used in the Cochrane
review (see Appendix 2) but used an Elsevier-Embase search instead of an
Ovid-Embase search (see Appendix 3).
Our search included MEDLINE (Ovid) (2010 – October Week 4 2018; searched 25 November 2018), EMBASE (Elsevier) (2010 – October Week 4 2018; searched 25 November 2018), CINAHL (Ebsco) (2010 – October Week 4 2018; searched 31 October 2018) and the Cochrane Central Register of Controlled Trials (CENTRAL) (2010 – February Week 2 2019; searched 14 February 2019). The searches covered 1 January 2010 until the beginning of November 2018, based on the earliest publication date of papers found during scoping searches; this starting date ensured some overlap with the results of the 2012 Cochrane review and reduced the risk of missing articles. The CENTRAL database search had to be repeated at a later date due to technical issues. The search strings are available in Appendix 2.
In addition, the 140 RCTs included in the Cochrane review were added to our search results.2
Data collection and analysis
Selection of studies
After removing duplicate references, all references were screened on title and abstract independently by two review authors (SVDB and DS). Randomized controlled trials were classified as ‘include’, ‘doubt’ or ‘exclude’. Disagreements were resolved by discussion. The full texts of all articles classified as ‘doubt’ or ‘include’ were obtained. Two review authors (SVDB and DS) independently read all full manuscripts and re-applied the inclusion criteria. If there was still no consensus or if doubt remained after reading the full text, a third review author (PV) was consulted. If doubt remained about the form of the delivered feedback after consulting the third reviewer, the article was excluded.
Data extraction
Two independent reviewers (SVDB and DS) used a data extraction sheet to extract the data from the included studies. This sheet was tailored based on the Cochrane Handbook and the EPOC data collection checklist.25,26 Separate data extraction files were made for dichotomous and continuous data.
Audit and feedback features that are known to be important, or that have been suggested by other authors as potentially facilitating A&F interventions, were also incorporated in our data extraction sheet.2-5 These features were: feedback frequency, the evidence-based nature of the feedback (yes, no or unclear), the use of benchmarks as comparisons in the feedback (yes, no or unclear) and the cognitive load of the feedback (does the feedback have a low cognitive load: yes, no or unclear). Interventions with feedback consisting of many graphs and/or much text were considered to have a high cognitive load, while interventions with few graphs and no unnecessary in-depth elements or text were considered to have a low cognitive load.
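For illustration, the feature coding described above could be represented as a structured record per study, as in the sketch below. This is a hypothetical representation, not the actual extraction sheet (see Appendices 4 and 5); the field names and category labels are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Literal

# Three-level judgement used for several of the coded feedback features.
Judgement = Literal["yes", "no", "unclear"]

@dataclass
class FeedbackFeatures:
    """Hypothetical record of the A&F features coded for one included study;
    the real extraction sheets are provided in Appendices 4 and 5."""
    study_id: str
    frequency: Literal["once", "weekly", "monthly", "less than monthly", "unclear"]
    evidence_based: Judgement        # feedback based on good-quality evidence?
    benchmark_comparison: Judgement  # benchmarks used as comparators?
    low_cognitive_load: Judgement    # few graphs, no unnecessary in-depth text?

# Example coding consistent with the results reported below for Hayashino et al. (ref 39)
example = FeedbackFeatures(
    study_id="Hayashino 2016",
    frequency="monthly",
    evidence_based="yes",
    benchmark_comparison="yes",
    low_cognitive_load="yes",
)
```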
Discrepancies were resolved by discussion. If no consensus was reached, another reviewer (PV) was consulted. In case of missing data, the first author was contacted. For each article, standard data were extracted, such as authors, year of publication and year of data collection, study design, number of participants, type of participants, duration of the trial, type of intervention, how the intervention was organized (e.g. number of randomized participants, providers, delivery, …) and outcome (dichotomous, continuous or other). (see Appendices 4 and 5 for the data extraction sheets for continuous and dichotomous outcomes, respectively)
Data analysis
Where appropriate, a meta-analysis was carried out; otherwise, the results were described narratively. A meta-analysis was carried out if there were at least two studies with a similar intervention in a similar population that addressed similar outcomes, and if sufficient data were available. If high heterogeneity was found, the meta-analysis was not reported, since its results would be unreliable.
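For illustration, the pooling rule described above can be sketched as a simple grouping procedure. This is a hypothetical sketch, not the procedure actually used in the review, and the heterogeneity check described above would still be applied separately to any candidate group.

```python
from collections import defaultdict

def poolable_groups(studies):
    """Group studies by intervention, population and outcome, and keep only
    groups with at least two studies reporting sufficient data; only such
    groups would be candidates for pooling (heterogeneity is assessed
    separately before a meta-analysis is reported)."""
    groups = defaultdict(list)
    for study in studies:
        if study["sufficient_data"]:
            key = (study["intervention"], study["population"], study["outcome"])
            groups[key].append(study)
    return {key: group for key, group in groups.items() if len(group) >= 2}
```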
Risk of bias assessment
The data extraction sheet included a list used by two independent reviewers (SVDB and DS) to assess the risk of bias. This list was tailored based on the Cochrane Collaboration tool for assessing the risk of bias in randomized trials.27 Discrepancies in the findings were resolved by consensus or, if consensus was not possible, by consulting a third reviewer (PV). For our risk of bias summary, blinding of participants and personnel (performance bias) was not considered a key domain, since the nature of an A&F intervention makes blinding difficult. The risk of performance bias was therefore not used to calculate the summarized risk of bias of the different studies. All other forms of bias were considered key domains: if any of them had a high or an unclear risk of bias, the summary was considered to have a high or an unclear risk of bias, respectively.
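For illustration, this summary rule can be expressed as a short decision procedure. The sketch below is a hypothetical rendering of the rule (assuming that a ‘high’ judgement in any key domain takes precedence over an ‘unclear’ one), not the procedure actually used in the review.

```python
def summarize_risk_of_bias(domains):
    """domains maps each risk-of-bias domain (e.g. 'selection', 'performance',
    'detection', 'attrition', 'reporting', 'other') to 'low', 'high' or 'unclear'."""
    # Performance bias (blinding) is not treated as a key domain here.
    key_judgements = [judgement for domain, judgement in domains.items()
                      if domain != "performance"]
    if any(judgement == "high" for judgement in key_judgements):
        return "high"
    if any(judgement == "unclear" for judgement in key_judgements):
        return "unclear"
    return "low"

# Example: a high risk of attrition bias drives the summary to "high",
# even though performance bias is ignored.
print(summarize_risk_of_bias({
    "selection": "low", "performance": "high", "detection": "low",
    "attrition": "high", "reporting": "low", "other": "low",
}))  # -> high
```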
Results
Searches
In this systematic review, a total of 12,054 records were identified through database searching. The 140 articles from the Cochrane review (on A&F in primary and non-primary care) were also added, resulting in 8,744 records that were screened after removing duplicates. (see Figure 1)
Data collection and analysis
Selection of studies
In total, 8,313 records were excluded because they did not meet the inclusion criteria. Most were excluded because there was no (electronic) A&F intervention or because they were not conducted in a primary care setting. A total of 431 full-text articles were assessed for eligibility, of which 402 were excluded (see Figure 1). In total, 23 studies were included through database searching28-50 and an additional 6 articles51-56 published before 2010 were included through our screening of the Cochrane review published in 2012. One article was available only as a conference abstract in the 2012 Cochrane review; the full-text article, identified through our database search and published in 2012, was included instead.50
Insert Figure 1: PRISMA flow-chart
Description of studies and electronic feedback features
The standard data we extracted showed 12 articles (41%) with continuous28,32-35,37-39,43,51,54,56 and 17 articles (59%) with dichotomous outcome measures.29-31,36,40-42,44-50,52,53,55 There was high heterogeneity in the outcome measures of the trials, and a wide range of clinical conditions was targeted by the interventions. Examples of outcome measures included the proportion of patients in compliance with guidelines for dental problems, the total number of antibiotic items dispensed and a composite measure of clinically significant depression (see Tables 1 and 2). The targeted clinical conditions included, for example, diabetes, depression, preventive medicine and hypertension management (see Tables 1 and 2). The trials usually had a cluster RCT design, although 5 studies (17%) used a non-clustered (individually randomized) design.34,36,37,52,55 The interventions mostly involved general physicians, but 2 trials (7%) were aimed at dentists35,54 and 1 trial (3.5%) at pharmacists.36 Patients were mostly the unit of analysis (19 studies, 65.5%), but some studies used providers (7 studies, 24%) or the distribution/prescription of medication (2 studies, 7%) as the unit of analysis. Finally, one study (3.5%) analyzed data at both the patient and the provider level (see Table 1 for continuous outcomes and Table 2 for dichotomous outcomes).
Insert: Table 1 + Table 2
The data on the different features of the electronic feedback showed 12 studies (41%) in which feedback was provided less than monthly,29,34,35,37,40,42,43,47,50,51,53,54 11 studies (38%) in which the frequency of the feedback was unclear,30,32,33,41,44-46,48,52,55,56 4 (14%) with feedback provided monthly,28,38,39,49 1 (3.5%) with weekly feedback31 and 1 study (3.5%) in which feedback was delivered only once.36 In 19 studies (65.5%) the feedback was evidence-based.28,31-36,39-43,45,46,49,51,54-56 The evidence base of the feedback was unclear in 9 studies (31%)29,30,37,38,44,47,50,52,53 and low in 1 study (3.5%).48 Benchmarks were used as a comparison in the feedback in 20 studies (69%),28,29,32,34-36,39-41,43-47,49,51-53,55,56 while this was unclear in 7 (24%).30,31,37,42,48,50,54 Only 2 studies (7%) did not use benchmarks as a comparison in their feedback.33,38 The cognitive load of the feedback was low in 12 studies (41%),29,32,33,35,37,39,41,43,46,49,53,56 high in 3 (10.5%)34,36,40 and unclear in 14 studies (48.5%).28,30,31,38,42,44,45,47,48,50-52,54,55 Finally, the direction of change was to increase behavior in 18 studies (62%)28,30,33,36,38,39,41,42,45-48,51-56 and to decrease behavior in 11 studies (38%).29,31,32,34,35,37,40,43,44,49,50 (see Table 3 for studies with continuous outcomes and Table 4 for studies with dichotomous outcomes)
Insert: Table 3 + Table 4
Results data analysis
Twenty-two studies (76%) showed an effect of the intervention,28-31,35-41,43-46,48,50-53,55,56 of which 3 studies (10.5%) showed only a partial effect,36,41,51 while 7 studies (24%) showed no significant effect.32-34,42,47,49,54 There were 3 studies (10.5%)35,39,43 that met all the feedback characteristics we examined (the feedback was evidence-based, provided more than once, used benchmarks as a comparison and had a low cognitive load) and that were effective, while 1 study (3.5%) with the same feedback features showed no effect.49
Of these 3 effective studies, Elouafkaoui et al. investigated the effectiveness of an electronic A&F intervention on the prescription of antibiotics by dentists. This resulted in a 5.7% reduction (95% CI -10.2% to -1.1%) in the antibiotic prescription rate in the intervention group relative to the control group.35 Furthermore, Hayashino et al. evaluated the effectiveness of a multifaceted intervention, consisting of monthly feedback reports using the Achievable Benchmark of Care method, on the technical quality of diabetes care by primary care physicians. This improved the quality of care by 19.0 percentage points (95% confidence interval 16.7 to 21.3 percentage points; P < 0.001).39 Finally, Gerber et al. studied the effect of a multifaceted intervention, consisting of education and quarterly A&F, on the prescription of antibiotics for acute respiratory infections by primary care pediatricians. Broad-spectrum antibiotic prescribing decreased from 26.8% to 14.3% (absolute difference, 12.5%) among intervention practices versus from 28.4% to 22.6% (absolute difference, 5.8%) in control practices, a difference in differences of 6.7 percentage points.43
However, because of the high heterogeneity of the results, no meta-analysis was performed, as its results would have been inconclusive.
Risk of bias assessment
There was a high risk of performance bias in 17 of the included studies (59%), while the risk of selection and detection bias was minimal. The risk of attrition bias and the risk of reporting bias were each high in 6 studies (21%) (see Figure 2). In summary, 4 studies (14%) had a low overall risk of bias,28,30,34,40 12 studies (41%) had a high risk29,32,37-39,41,45-47,51-53 and 13 studies (45%) had an unclear risk of bias31,33,35,36,42-44,48-50,54-56 (see Figure 3).
Out of the 4 articles with a low risk of bias summary, 3 included
feedback features which are known to be effective (feedback provided
more than once) or were suggested as potentially important for improving
A&F interventions (evidence-based feedback with the use of benchmarks).
Insert Figure 2: Risk of bias graph: review authors’ judgements about
each risk of bias item presented as percentages across all included
studies.
Insert Figure 3: Risk of bias summary: review authors’ judgements about
each risk of bias item for each included study.
Discussion
Principal findings and comparison with previous work
This systematic review identified 29 articles describing an electronic A&F intervention in primary care. Overall, 22 studies (76%) showed an effect of the intervention on outcome measures such as the change in systolic blood pressure, medication prescriptions, the proportion of patients with a medication error and the change in the proportion of patients treated with oral anticoagulants. Three of these studies (10.5%) included all the feedback features that were investigated (the feedback was evidence-based, had a low cognitive load, used benchmarks as a comparison and was provided more than once).35,39,43 The interventions in these 3 studies targeted behaviors such as the prescription of antibiotics by dentists, the technical quality of diabetes care provided by primary care physicians and the prescription of antibiotics for respiratory tract infections by primary care pediatricians. However, there was high heterogeneity in the primary outcomes of these studies and the electronic A&F interventions were designed very diversely, with various feedback features, making a meta-analysis unreliable. For this reason, we were also unable to make generalizable claims about the importance of the feedback features we examined. Furthermore, only 4 studies (14%) had a low risk of bias summary, not counting performance bias. In addition, these 4 articles were published more recently (from 2016 onwards), possibly indicating a maturing methodology in the A&F research field.28,30,34,40
In general, these findings confirm the overall stagnation in A&F research described by other authors22 and show that there is insufficient implementation research to further the field and build on existing knowledge.23 Previous work showed that feedback is best provided more than once, and our findings indicate this was only the case in 12 of the included studies (41%). However, 3 out of the 4 articles with a low risk of bias summary, all from a more recent publication date, included feedback features that are known to be effective (feedback provided more than once) or that have been suggested as potentially important for improving A&F interventions (evidence-based feedback with the use of benchmarks as a comparison).28,34,40 Hence, despite the stagnation described in the past, more recent publications were of high quality and built on existing research, which could indicate a trend towards reinvigorating A&F research. These findings also correspond with the latest innovation for investigating the effectiveness of A&F interventions: implementation laboratories.57 Implementation laboratories are being developed to promote collaborative research between healthcare system partners and researchers and to create opportunities for experimentation. These laboratories thus aim to produce generalizable knowledge about how to optimize A&F. Internationally, these implementation laboratories are united in a ‘meta-laboratory’ approach to facilitate cumulative research in the field of A&F.57
Although A&F, and more specifically electronic A&F, has been studied extensively in primary care, a meta-analysis to pool the results and produce generalizable data was not feasible. This emphasizes the difficulties in designing complex healthcare interventions and the need for a framework and a well-defined research agenda when setting up electronic A&F trials, so that interventions can be reproduced and compared.23,58 Designing a methodology for developing generalizable automated A&F interventions in primary care could be useful for this purpose, since automated quality assessment based on EHR data offers promising prospects if its challenges are addressed.15 Another important challenge when using EHR data is the completeness of these data; provision of data quality feedback could improve this.59 After all, if the data stored in the EHR are incomplete, using them for an electronic A&F intervention will produce unreliable results.
Large data repositories, such as those of the Netherlands Institute for Health Services Research (NIVEL), the British Royal College of General Practitioners (RCGP) Research and Surveillance Centre (RSC) network and the Belgian INTEGO database, have been available in primary care for many years.17,60,61 Using the facilities of these institutes in a well-designed trial with a standardized methodology could address some of the problems in evaluating the effectiveness and features of electronic A&F interventions. In this respect, recent research indicates the need for an evolution from two-arm trials of A&F versus control to head-to-head trials of A&F variants, to measure small differences in the effectiveness of different A&F features.57 Such trials need to be sufficiently powered, requiring large sample sizes, which could be provided by these large primary care data repositories.57 However, further research, mainly into describing a methodology for an automated and EHR-based A&F intervention in primary care, is necessary. Designing and using a standardized methodology to create automated A&F interventions based on EHR data could allow comparison of future electronic A&F interventions. Such interventions could then be used to investigate different features of the intervention, which in turn could advance the field of A&F research in general.
Strengths and limitations
To our knowledge, this is the first systematic review that investigated electronic A&F in primary care only. One of the strengths of this review is our search, which was identical to that of the most recent Cochrane review. By replicating the search strings of the Cochrane review, followed by screening abstracts and full-text articles against our inclusion and exclusion criteria, this review had a broad basis. Our search led to a higher number of articles screened for inclusion or exclusion on abstract (n=8,744) and on full text (n=431) compared with a previous review performed in a primary and non-primary care setting.12 This approach reduced the risk that relevant articles were missed.
Our review also has several limitations. Because our results showed high heterogeneity, no meta-analysis could be meaningfully performed and no generalizable data could be produced; the results were therefore described narratively. Our definition of electronic A&F was strict, and articles for which it was unclear whether the A&F intervention was delivered electronically were excluded, thus possibly missing some relevant articles. However, compared with a previous review on electronic A&F, which included 7 articles, our review included a higher number of articles, since studies in which electronic A&F was part of a multifaceted intervention were also retained.12 Finally, for the calculation of our risk of bias summary, every form of bias except performance bias was considered a key domain, which may have produced too severe an overall risk of bias evaluation.
Conclusion
This systematic review included 29 articles describing an electronic A&F intervention in primary care, of which 76% showed an effect of the intervention on outcome measures such as change in systolic blood pressure, medication prescriptions, the proportion of patients with a medication error and change in the proportion of patients treated with oral anticoagulants. Approximately 10% of the studies included all the facilitating feedback conditions we examined and showed an effect of the intervention, particularly on the prescription of antibiotics by dentists and primary care physicians and on the technical quality of diabetes care. There was high heterogeneity in the results, making a meta-analysis unreliable. The design of the A&F interventions showed great variability and, overall, our results confirmed the previously described stagnation in the field of A&F research. However, 4 recent publications with a low risk of bias showed a positive evolution in the design and description of A&F interventions. Developing a framework or methodology for automated A&F interventions in primary care could be useful for the necessary future research.
References
1. Jamtvedt G, Young JM, Kristoffersen DT, O’Brien MA, Oxman AD. Audit
and feedback: effects on professional practice and health care outcomes. The Cochrane database of systematic reviews. 2006(2):Cd000259.
2. Ivers N, Jamtvedt G, Flottorp S, et al. Audit and feedback: effects
on professional practice and healthcare outcomes. The Cochrane
database of systematic reviews. 2012;6:Cd000259.
3. Colquhoun H, Michie S, Sales A, et al. Reporting and design elements
of audit and feedback interventions: a secondary review. BMJ
quality & safety. 2016.
4. Colquhoun HL, Carroll K, Eva KW, et al. Advancing the literature on
designing audit and feedback interventions: identifying theory-informed
hypotheses. Implementation Science. 2017;12(1):117.
5. Brehaut JC, Colquhoun HL, Eva KW, et al. Practice Feedback
Interventions: 15 Suggestions for Optimizing Effectiveness. Annals
of internal medicine. 2016;164(6):435-441.
6. Brown B, Gude WT, Blakeman T, et al. Clinical Performance Feedback
Intervention Theory (CP-FIT): a new theory for designing, implementing,
and evaluating feedback in health care based on a systematic review and
meta-synthesis of qualitative research. Implementation science :
IS. 2019;14(1):40.
7. Mould DR, Upton RN, Wojciechowski J. Dashboard Systems: Implementing
Pharmacometrics from Bench to Bedside. The AAPS Journal. 2014;16(5):925-937.
8. Waitman LR, Phillips IE, McCoy AB, et al. Adopting real-time
surveillance dashboards as a component of an enterprisewide medication
safety strategy. Jt Comm J Qual Patient Saf. 2011;37(7):326-332.
9. Khairat SS, Dukkipati A, Lauria HA, Bice T, Travers D, Carson SS. The
Impact of Visualization Dashboards on Quality of Care and Clinician
Satisfaction: Integrative Literature Review. JMIR Hum Factors. 2018;5(2):e22.
10. Karami M, Langarizadeh M, Fatehi M. Evaluation of Effective
Dashboards: Key Concepts and Criteria. Open Med Inform J. 2017;11:52-57.
11. Brehaut JC, Eva KW. Building theories of knowledge translation
interventions: use the entire menu of constructs. Implementation
science : IS. 2012;7:114.
12. Tuti T, Nzinga J, Njoroge M, et al. A systematic review of
electronic audit and feedback: intervention effectiveness and use of
behaviour change theory. Implementation Science. 2017;12(1):61.
13. Patel S, Rajkomar A, Harrison JD, et al. Next-generation audit and
feedback for inpatient quality improvement using electronic health
record data: A cluster randomised controlled trial. BMJ Quality
and Safety. 2018;27(9):691-699.
14. Gulliford MC, Prevost AT, Charlton J, et al. Effectiveness and
safety of electronically delivered prescribing feedback and decision
support on antibiotic use for respiratory illness in primary care:
REDUCE cluster randomised trial. BMJ (Clinical research ed). 2019;364:l236.
15. Roth CP, Lim YW, Pevnick JM, Asch SM, McGlynn EA. The challenge of
measuring quality of care from the electronic health record. American journal of medical quality : the official journal of the
American College of Medical Quality. 2009;24(5):385-394.
16. Bartholomeeusen S, Kim C-Y, Mertens R, Faes C, Buntinx F. The
denominator in general practice, a new approach from the Intego
database. Family Practice. 2005;22(4):442-447.
17. Schweikardt C, Verheij RA, Donker GA, Coppieters Y. The historical
development of the Dutch Sentinel General Practice Network from a
paper-based into a digital primary care monitoring system. Journal
of Public Health. 2016;24(6):545-562.
18. Clinical Practice Research Datalink. https://www.cprd.com/home/.
Accessed August 18, 2019.
19. Verheij AR, Curcin V, Delaney CB, McGilchrist MM. Possible Sources
of Bias in Primary Care Electronic Health Record Data Use and Reuse. J Med Internet Res. 2018;20(5):e185.
20. Smets M, Smeets M, Van den Bulck S, Janssens S, Aertgeerts B, Vaes
B. Defining quality indicators for heart failure in general practice. Acta Cardiol. 2018:1-8.
21. Van den Bulck SA, Vankrunkelsven P, Goderis G, et al. Development of
quality indicators for type 2 diabetes, extractable from the electronic
health record of the general physician. A RAND-modified Delphi method. Primary Care Diabetes. 2019.
22. Ivers NM, Grimshaw JM, Jamtvedt G, et al. Growing literature,
stagnant science? Systematic review, meta-regression and cumulative
analysis of audit and feedback interventions in health care. Journal of general internal medicine. 2014;29(11):1534-1541.
23. Ivers NM, Sales A, Colquhoun H, et al. No more ’business as usual’
with audit and feedback interventions: towards an agenda for a
reinvigorated intervention. Implementation science : IS. 2014;9:14.
24. Moher D, Liberati A, Tetzlaff J, Altman DG, The PRISMA Group. Preferred
Reporting Items for Systematic Reviews and Meta-Analyses: The PRISMA
Statement. PLOS Medicine. 2009;6(7):e1000097.
25. Cochrane Effective Practice and Organisation of Care (EPOC) Review Group. Data Collection Checklist. 2002.
26. Higgins JPT, Green S (editors). Cochrane Handbook for Systematic
Reviews of Interventions. Chichester (UK): John Wiley & Sons, 2011.
27. Higgins JPT, Altman DG, Gøtzsche PC, et al. The Cochrane
Collaboration’s tool for assessing risk of bias in randomised trials. BMJ (Clinical research ed). 2011;343.
28. Patel MS, Kurtzman GW, Kannan S, et al. Effect of an Automated
Patient Dashboard Using Active Choice and Peer Comparison Performance
Feedback to Physicians on Statin Prescribing: The PRESCRIBE Cluster
Randomized Clinical Trial. JAMA network open. 2018;1(3):e180818.
29. Lim WY, Hss AS, Ng LM, et al. The impact of a prescription review
and prescriber feedback system on prescribing practices in primary care
clinics: a cluster randomised trial. BMC family practice. 2018;19(1).
30. Vinereanu D, Lopes Renato D, Bahit MC, et al. A multifaceted
intervention to improve treatment with oral anticoagulants in atrial
fibrillation (IMPACT-AF): an international, cluster-randomised trial. Lancet (London, England). 2017;390(10104):1737-1746.
31. Urbiztondo I, Bjerrum L, Caballero L, Suarez MA, Olinisky M, Córdoba
G. Decreasing inappropriate use of antibiotics in primary care in four
countries in South America: cluster randomized controlled trial. Antibiotics. 2017;6(4).
32. Trietsch J, van Steenkiste B, Grol R, et al. Effect of audit and
feedback with peer review on general practitioners’ prescribing and test
ordering performance: a cluster-randomized controlled trial. BMC
family practice. 2017;18(1):53.
33. Holt TA, Dalton A, Marshall T, et al. Automated Software System to
Promote Anticoagulation and Reduce Stroke Risk: Cluster-Randomized
Controlled Trial. Stroke. 2017;48(3):787-790.
34. Hemkens LG, Saccilotto R, Reyes SL, et al. Personalized prescription
feedback using routinely collected data to reduce antibiotic use in
primary care: a randomized clinical trial. JAMA Internal Medicine. 2017;177(2):176-183.
35. Elouafkaoui P, Young L, Newlands R, Duncan E, Elders A, Clarkson J.
An audit and feedback intervention for reducing antibiotic prescribing
in general dental practice: the RAPiD cluster randomised controlled
trial. PLoS medicine. 2017;13(8).
36. Winslade N, Eguale T, Tamblyn R. Optimising the changing role of the
community pharmacist: A randomised trial of the impact of audit and
feedback. BMJ open. 2016;6(5).
37. Sarafi NA, Farrokhi NM, Haghdoost A, Bahaadinbeigy K, Abu-Hanna A,
Eslami S. The effect of registry-based performance feedback via short
text messages and traditional postal letters on prescribing parenteral
steroids by general practitioners: a randomized controlled trial. International journal of medical informatics. 2016;87:36-43.
38. Murphy D, Wu L, Thomas E, Forjuoh S, Meyer A, Singh H. Electronic
Trigger-Based Intervention to Reduce Delays in Diagnostic Evaluation for
Cancer: a Cluster Randomized Controlled Trial. Journal of clinical
oncology. 2016;33(31):3560-3567.
39. Hayashino Y, Suzuki H, Yamazaki K, Goto A, Izumi K, Noda M. A
cluster randomized trial on the effect of a multifaceted intervention
improved the technical quality of diabetes care by primary care
physicians: The Japan Diabetes Outcome Intervention Trial-2 (J-DOIT2). Diabetic Medicine. 2016;33(5):599-608.
40. Guthrie B, Kavanagh K, Robertson C, et al. Data feedback and
behavioural change intervention to improve primary care prescribing
safety (EFIPPS): multicentre, three arm, cluster randomised controlled
trial. BMJ (Online). 2016;354.
41. Peiris D, Usherwood T, Panaretto K, et al. Effect of a
computer-guided, quality improvement program for cardiovascular disease
risk management in primary health care: the treatment of cardiovascular
risk using electronic decision support cluster-randomized trial. Circulation Cardiovascular quality and outcomes. 2015;8(1):87-95.
42. Ogedegbe G, Tobin JN, Fernandez S, et al. Counseling African
Americans to Control Hypertension: cluster-randomized clinical trial
main effects. Circulation. 2014;129(20):2044-2051.
43. Gerber JS, Prasad PA, Fiks AG, et al. Effect of an outpatient
antimicrobial stewardship intervention on broad-spectrum antibiotic
prescribing by primary care pediatricians: a randomized trial. JAMA. 2013;309(22):2345-2352.
44. Almeida O, Pirkis J, Kerse N, et al. A randomized trial to reduce
the prevalence of depression and self-harm behavior in older primary
care patients. Annals of family medicine. 2012;10(4):347-356.
45. Pape G, Hunt J, Butler K, et al. Team-based care approach to
cholesterol management in diabetes mellitus: 2-Year cluster randomized
controlled trial. Archives of internal medicine. 2011;171(16):1480-1486.
46. Guldberg T, Vedsted P, Kristensen J, Lauritzen T. Improved quality
of Type 2 diabetes care following electronic feedback of treatment
status to general practitioners: a cluster randomized controlled trial. Diabetic medicine : a journal of the British Diabetic
Association. 2011;28(3):325-332.
47. Estrada CA, Safford MM, Salanitro AH, et al. A web-based diabetes
intervention for physician: a cluster-randomized effectiveness trial. International Journal for Quality in Health Care. 2011;23(6):682-689.
48. Ornstein S, Nemeth LS, Jenkins RG, Nietert PJ. Colorectal cancer
screening in primary care: translating research into practice. Medical care. 2010;48(10):900-906.
49. Linder J, Schnipper J, Tsurikova R, et al. Electronic health record
feedback to improve antibiotic prescribing for acute respiratory
infections. The American journal of managed care. 2010;16(12
Suppl HIT):e311-319.
50. Avery AJ, Rodgers S, Cantrill JA, et al. A pharmacist-led
information technology intervention for medication errors (PINCER): a
multicentre, cluster randomised, controlled trial and cost-effectiveness
analysis. The Lancet. 2012;379(9823):1310-1319.
51. Svetkey LP, Pollak KI, Yancy WS, Jr., et al. Hypertension
improvement project: randomized trial of quality improvement for
physicians and lifestyle modification for patients. Hypertension. 2009;54(6):1226-1233.
52. Mold JW, Aspy CA, Nagykaldi Z. Implementation of evidence-based
preventive services delivery processes in primary care: an Oklahoma
Physicians Resource/Research Network (OKPRN) study. J Am Board Fam
Med. 2008;21(4):334-344.
53. Wadland WC, Holtrop JS, Weismantel D, Pathak PK, Fadel H, Powell J.
Practice-based referrals to a tobacco cessation quit line: assessing the
impact of comparative feedback vs general reminders. Annals of
family medicine. 2007;5(2):135-142.
54. Bahrami M, Deery C, Clarkson JE, et al. Effectiveness of strategies
to disseminate and implement clinical guidelines for the management of
impacted and unerupted third molars in primary dental care, a cluster
randomised controlled trial. Br Dent J. 2004;197(11):691-696;
discussion 688.
55. Bonevski B, Sanson-Fisher RW, Campbell E, Carruthers A, Reid AL,
Ireland M. Randomized controlled trial of a computer strategy to
increase general practitioner preventive care. Preventive
Medicine. 1999;29(6 Pt 1):478-486.
56. McAlister NH, Covvey HD, Tong C, Lee A, Wigle ED. Randomised
controlled trial of computer assisted management of hypertension in
primary care. British medical journal (Clinical research ed). 1986;293(6548):670-674.
57. Grimshaw JM, Ivers N, Linklater S, et al. Reinvigorating stagnant
science: implementation laboratories and a meta-laboratory to
efficiently advance the science of audit and feedback. BMJ Quality
& Safety. 2019;28(5):416.
58. Campbell M, Fitzpatrick R, Haines A, et al. Framework for design and
evaluation of complex interventions to improve health. BMJ
(Clinical research ed). 2000;321(7262):694.
59. van der Bij S, Khan N, Ten Veen P, de Bakker DH, Verheij RA.
Improving the quality of EHR recording in primary care: a data quality
feedback tool. Journal of the American Medical Informatics
Association : JAMIA. 2016.
60. de Lusignan S, Correa A, Smith GE, et al. RCGP Research and
Surveillance Centre: 50 years’ surveillance of influenza, infections,
and respiratory conditions. The British journal of general
practice : the journal of the Royal College of General Practitioners. 2017;67(663):440-441.
61. Truyers C, Goderis G, Dewitte H, Akker M, Buntinx F. The Intego
database: background, methods and basic results of a Flemish general
practice-based continuous morbidity registration project. BMC
medical informatics and decision making. 2014;14:48.
Acknowledgements
SVDB, BV, GG, RH and PV contributed to the design and conceptualization
of the study.
SVDB performed the search.
SVDB, DS and PV performed the screening, data extraction and risk of bias assessment.
SVDB, DS, BV, GG, RH and PV reviewed and edited the manuscript.
The authors would like to thank Dr. Anne-Catherine Vanhove for her assistance with the search.
Conflict of Interest
None declared
Abbreviations
A&F: Audit & Feedback
EHR: Electronic Health Record
RCT: Randomized Controlled Trial
Figure legends
Figure 1: PRISMA flow-chart
Figure 2: Risk of bias graph: review authors’ judgements about each risk
of bias item presented as percentages across all included studies.
Figure 3: Risk of bias summary: review authors’ judgements about each
risk of bias item for each included study.
Appendices
Appendix 1: PRISMA checklist
Appendix 2: Search strings and results
Appendix 3: Elsevier-Embase search: syntax used for translation
Appendix 4: Data extraction sheet continuous outcomes
Appendix 5: Data extraction sheet dichotomous outcomes
Tables
Table 1: Studies with continuous outcomes