Canadian Association of Radiologists Diagnostic Imaging Referral Guidelines: a guideline development protocol
=============================================================================================================

* Candyce Hamel
* Ryan Margau
* Paul Pageau
* Marc Venturi
* Leila Esmaeilisaraji
* Barb Avard
* Sam Campbell
* Noel Corser
* Nicolas Dea
* Edmund Kwok
* Cathy MacLean
* Erin Sarrazin
* Charlotte J. Yong-Hing
* Kaitlin Zaki-Metias

## Abstract

**Background:** Comprehensive diagnostic imaging referral guidelines are an important tool to assist referring clinicians and radiologists in determining the safest and best-clinical-value diagnostic imaging study for their patients; the Canadian Association of Radiologists (CAR) last produced its diagnostic imaging referral guidelines in 2012. In partnership with several national organizations, referring clinicians, radiologists, and patient and family advisors from across Canada, the association is updating its referral guidelines using a new guideline-development methodology, and the resulting recommendations will be suited for integration into clinical decision support systems.

**Methods:** Expert panels of radiologists, referring clinicians and a patient advisor will work with epidemiologists at the CAR to create guidelines across 13 clinical sections. The expert panel for each section will first create a comprehensive list of clinical and diagnostic scenarios to include in the guidelines. Canadian Association of Radiologists epidemiologists will then conduct a systematic rapid scoping review to identify systematically produced guidelines from other guideline groups. The corresponding expert panel will develop diagnostic imaging recommendations for each clinical and diagnostic scenario using the recommendations identified from the scoping review and contextualize them to the Canadian health care systems.
The expert panels will accomplish this using an adapted Grading of Recommendations Assessment, Development and Evaluation framework, which reflects the benefits and harms, values and preferences, equity, accessibility, resources and cost.

**Interpretation:** Freely available, up-to-date, comprehensive, Canadian-specific diagnostic imaging referral guidelines are needed. A transparent and structured guideline-development approach will aid the CAR and its partners in producing guidelines across its 13 sections.

A 2022 systematic review, which included 34 studies evaluating practices undertaken by health care professionals in a Canadian health care setting, reported that diagnostic imaging was underused or overused a median of 13.8% of the time (interquartile range 4.5%–29.0%).1 This over- and underuse of diagnostic imaging may result in iatrogenic harm to patients, longer wait times, poorer health outcomes due to delays in diagnosis, and inefficient use of scarce health care resources.1,2

Demand for diagnostic and medical imaging is increasing as the Canadian population ages. In fact, the number of computed tomography and magnetic resonance imaging examinations is expected to more than double between 2017 and 2040.3 Imaging referral guidelines can be an important tool in ensuring that patients get the safest and best-clinical-value diagnostic imaging study at the right time.4,5 Trustworthy guidelines, and the recommendations within them, should be evidence based and developed using rigorous methodology.6 Guidelines developed for other countries (e.g., from the American College of Radiology [ACR] and the Royal College of Radiologists [RCR] in the United Kingdom) can serve as an important reference. However, Canadian guidelines are required to ensure that geographic distribution, population characteristics and the structure of the health care systems are considered in the guideline-development process.
This highlights the need to develop country-specific, systematically produced diagnostic imaging referral guidelines.

In 2012, the Canadian Association of Radiologists (CAR) produced a comprehensive set of guideline recommendations for diagnostic imaging referral.7 These recommendations were categorized into 13 sections and included recommendations for 338 clinical and diagnostic scenarios (Table 1). In some instances, sections cover specific anatomy or organ systems (e.g., head and neck, musculoskeletal system); in others, they refer to clinical or referral pathways or scenarios (e.g., trauma, pediatrics). The 2012 guidelines are now more than a decade old and must be revised to reflect updated evidence. Guideline methodology has also evolved over this time, offering new, robust approaches. Additionally, as referring clinicians are the primary users of these guidelines, we want to ensure their involvement in the development process. Last, for suitable integration into clinical decision support systems, the format of the guideline recommendations needed to be modified. A clinical decision support system is defined as “any software designed to directly aid in clinical decision making in which characteristics of individual patients are matched to a computerized knowledge base for the purpose of generating patient-specific assessments or recommendations that are then presented to clinicians for consideration.”8

[Table 1:](http://www.cmajopen.ca/content/11/2/E248/T1) Sections of the 2012 Canadian Association of Radiologists recommendations

In 2020, the CAR, in collaboration with the Canadian Medical Association through an unrestricted sponsorship grant, developed a plan to update these referral guidelines, tailored to the Canadian health care context. An oversight working group was created, made up of radiologists, referring medical professionals (e.g., physicians, nurse practitioners), and a patient and family advisor.
The working group formed partnerships with national associations, including the Canadian Association of Emergency Physicians, The College of Family Physicians of Canada, Choosing Wisely, the Nurse Practitioner Association of Canada, and the Society of Rural Physicians of Canada.

A systematic rapid scoping review will inform each section, and each sectional expert panel will formulate recommendations using the Grading of Recommendations Assessment, Development and Evaluation (GRADE) framework9,10 as guidance, adapted where necessary. The CAR working group has opted to use the concepts found in GRADE for guidelines, as it is a robust framework that considers contextual criteria when formulating recommendations.11 These include the desirable and undesirable effects and the balance of these effects, values and preferences, equity, accessibility, resources required and costs.

It is important to note that guidelines cannot always account for variability between patients (e.g., patient values). The recommendations developed as part of this initiative are not intended to replace the clinical expertise and judgment of the referring clinician, but to provide guidance. Depending on the clinical scenario, expert opinion may supplement or override the recommendation.

In this article, we describe the process and methodology for developing the CAR Diagnostic Imaging Referral Guidelines. The robust methodology described in this protocol also presents a guide for other organizations and associations to collaboratively develop rapid guidelines.

## Methods

### Guideline development

Figure 1 displays the overall schematic of the guideline development process.

[Figure 1:](http://www.cmajopen.ca/content/11/2/E248/F1) Project flow diagram. Note: CAR = Canadian Association of Radiologists, GRADE = Grading of Recommendations Assessment, Development and Evaluation.
#### Recruitment of expert panel

Each of the 13 sections is represented by an expert panel, composed of 6 to 9 members. The expert panel is led by a chair (or co-chairs), includes radiologists, referring clinicians, at least 1 patient advisor and a guideline methodologist, and has geographic representation from across Canada.12 Members of the working group will provide candidates for the expert panel chair and other expert panel members. Recruited expert panel members may also provide names of other candidate expert panel members (see Appendix 1, available at [www.cmajopen.ca/content/11/2/E248/suppl/DC1](http://www.cmajopen.ca/content/11/2/E248/suppl/DC1), for additional details). Two CAR epidemiologists will conduct the rapid scoping reviews, and the senior epidemiologist (C.H.) will serve as the guideline methodologist.

Following the Guidelines International Network (GIN)-McMaster Guideline Development Checklist,12 members of each expert panel will complete and sign a conflict of interest form, which covers any financial, intellectual or academic conflicts of interest. We will use the CAR conflicts of interest policy to manage any potential conflicts of interest. Expert panel members will also receive and sign a terms of reference document, which describes the purpose of the project, the mandate of the project and of expert panel members, and other supporting information (e.g., quorum, target audience, staff liaison).

#### Meetings

Expert panels will meet a minimum of 4 times over the guideline-development process. Availability of expert panel members will determine the meeting schedule; each meeting will include at least 50% of the expert panel members and should include at least 1 radiologist, 1 referring clinician and 1 patient advisor. If members are unable to attend either of the first 2 meetings, an individual meeting will be offered to cover the material.
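The meeting-composition rule above can be expressed as a simple check. This is a hypothetical sketch only: the function name and role labels are illustrative, not part of the CAR protocol.

```python
# Sketch of the quorum rule described above: at least 50% of panel members
# present, including at least 1 radiologist, 1 referring clinician and
# 1 patient advisor. Role strings are illustrative assumptions.

def has_quorum(panel_size: int, attendee_roles: list[str]) -> bool:
    """Check a meeting's attendance against the minimum composition rule."""
    if len(attendee_roles) * 2 < panel_size:  # fewer than 50% present
        return False
    required_roles = {"radiologist", "referring clinician", "patient advisor"}
    return required_roles.issubset(set(attendee_roles))

# Example: a 7-member panel with 4 attendees covering all required roles
print(has_quorum(7, ["radiologist", "radiologist",
                     "referring clinician", "patient advisor"]))  # True
```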
#### Revise and restructure list of clinical and diagnostic scenarios

After the initial meeting to introduce the project and discuss the mandate of the expert panel, members will revise and restructure the list of clinical and diagnostic scenarios, using the 2012 CAR list as a starting point for discussions. This may be done synchronously during a virtual meeting or individually offline, depending on member preference. The list is finalized once consensus is reached.

#### Conduct rapid scoping review

Producing guidelines can be time and resource intensive, particularly when recommendations are developed using evidence from systematic reviews and the GRADE framework. As the 2012 CAR Diagnostic Imaging Referral Guidelines included recommendations for 338 clinical and diagnostic scenarios, we will use a systematic rapid scoping review approach, with evidence-based guidelines as the unit of inclusion. A scoping review allows for mapping of the body of literature and can be conducted to summarize and disseminate research findings.13 Further, a rapid review is “a form of knowledge synthesis that accelerates the process of conducting a traditional systematic review through streamlining or omitting a variety of methods to produce evidence for stakeholders in a resource-efficient manner.”14 Guidance from the Joanna Briggs Institute,13 along with additional guidance on conducting rapid reviews,15 will direct the conduct of the systematic rapid scoping review for each of the 13 sections. We used the relevant items in the Preferred Reporting Items for Systematic Reviews and Meta-Analysis Protocols (PRISMA-P) statement16 as a guide to ensure reporting standards are met for the description of the systematic rapid scoping review methods reported herein.

##### Eligibility criteria

The inclusion criteria are presented in Table 2.
[Table 2:](http://www.cmajopen.ca/content/11/2/E248/T2) Inclusion criteria

##### Information sources

An experienced information specialist will develop a search strategy using the updated list of clinical and diagnostic scenarios produced by the expert panel. A senior epidemiologist will review this search strategy for completeness. The information specialist will execute the search in MEDLINE and Embase using controlled vocabulary (e.g., Medical Subject Headings) and title and abstract keywords. For feasibility, and to capture the newest evidence base, we will limit the search to guidelines published in the last 5 years.

We will perform supplemental searching to identify guidelines not captured in the electronic databases. For feasibility, we will search the ACR Appropriateness Criteria, the National Institute for Health and Care Excellence guidelines and relevant section-specific specialty societies (e.g., Society of Obstetricians and Gynaecologists of Canada). We will also include the recommendations found in the RCR iRefer, 8th edition (2017).5

##### Study selection

###### Title and abstract screening

Following published guidance,19 we will use an artificial intelligence (AI) active machine learning tool (the re-rank tool) in DistillerSR,20 an online systematic review software, during title and abstract screening. Using a standardized form, 1 reviewer will screen the records in prioritized order, as determined by the active machine learning (i.e., ordered by likelihood of inclusion).
Once the software has predicted that 95% of the included studies have been identified, we will implement a stop-screening approach (further described in Appendix 2, available at [www.cmajopen.ca/content/11/2/E248/suppl/DC1](http://www.cmajopen.ca/content/11/2/E248/suppl/DC1)), a threshold that has performed well.21,22 The re-rank tool screen has 4 ways to display the screening progress and the number of predicted references included (Appendix 3, available at [www.cmajopen.ca/content/11/2/E248/suppl/DC1](http://www.cmajopen.ca/content/11/2/E248/suppl/DC1)).

##### Full-text screening

Using a standardized form in DistillerSR, 2 reviewers will conduct a pilot exercise on about 25–50 records against the eligibility criteria described in Table 2. The 2 reviewers will resolve any disagreements by consensus. After the pilot exercise, 1 reviewer will evaluate the remaining full texts.

##### Data extraction and recommendations mapping

One reviewer will map the recommendations from each included guideline to the relevant clinical and diagnostic scenario in the updated CAR guideline section. Other data extraction items include the guideline group name(s), year of publication (or last update), method of evaluating the quality or certainty of the recommendation (e.g., Oxford Centre for Evidence-based Medicine, GRADE), recommendation grade, and the GRADE evidence profile or summary of findings tables, when available.

##### Critical appraisal

One reviewer will critically appraise the included guidelines using the Appraisal of Guidelines for Research & Evaluation II (AGREE II) checklist (updated in December 2017),17,18 using a modified scale (Appendix 4, available at [www.cmajopen.ca/content/11/2/E248/suppl/DC1](http://www.cmajopen.ca/content/11/2/E248/suppl/DC1)). Briefly, the AGREE II tool rates each question on a scale from 1 to 7, which we have modified to 3 options: Agree, Partially agree and Disagree.
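The 95% stop-screening rule described above can be sketched as a predicted-recall check. This is a minimal illustration under stated assumptions — DistillerSR's re-rank tool computes its own predictions internally, and the function and inputs here are hypothetical, not the tool's actual API.

```python
# Hypothetical sketch of a predicted-recall stopping rule for prioritized
# title/abstract screening: stop once the includes found so far reach 95%
# of the total includes the model predicts exist in the record set.

def should_stop_screening(includes_found: int,
                          predicted_total_includes: float,
                          target_recall: float = 0.95) -> bool:
    """Return True once estimated recall reaches the target threshold."""
    if predicted_total_includes <= 0:
        return False  # no prediction yet; keep screening
    return includes_found / predicted_total_includes >= target_recall

# Example: the model predicts roughly 40 relevant records in total
print(should_stop_screening(36, 40))  # False (90% < 95%)
print(should_stop_screening(38, 40))  # True (95% reached)
```

The reviewer keeps screening in likelihood order, so the remaining unscreened records are those the model considers least likely to be relevant when the rule fires.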
The expert panel will consider the quality of the guideline during the discussions and formulation of the recommendations.

#### Expert panel member review

Once the scoping review is completed, the CAR epidemiologists will share the results with the expert panel members for independent review over a 4-week period. In addition to the complete evidence-mapping tables, we will provide a synopsis of the information across guidelines for each clinical and diagnostic scenario. These synopses are useful during recommendation formulation, as they highlight concordance and discordance among the recommendations.

#### Development of recommendations

The expert panel members will meet to formulate the recommendations for each clinical and diagnostic scenario in the section. Using a modified GRADE for guidelines approach, in addition to the recommendations from the included guidelines, the expert panel discussions will consider the following contextualization factors when formulating the recommendations: the certainty of the evidence (where available); the balance of benefits and harms; patient values and preferences; equity, acceptability and feasibility; and resource use and cost.9,10 Although there are limitations to this approach, for feasibility, we will extract the judgments around the certainty of the evidence (e.g., very low, low, moderate, high) as presented in the guidelines. Using GRADE as guidance,10 expert panel members will assign the strength (i.e., strong, conditional) and direction (i.e., for, against) of each recommendation, using consistent phrasing and graphical representation for the recommendations (Figure 2). For clinical and diagnostic scenarios that do not have any included guidelines, the expert panel members will formulate the recommendations through discussion and consensus, considering their clinical expertise, patient values and preferences, equity, accessibility, resources and costs.
In these instances, older guidelines may be used as part of the discussion and are considered part of the clinical expertise; however, as they were not identified using the systematic search approach, they are not considered an “included” guideline.

[Figure 2:](http://www.cmajopen.ca/content/11/2/E248/F2) Determining the strength of the recommendation. Created using the guidance provided in Andrews and colleagues.9

#### Draft guideline

A senior epidemiologist at the CAR, who is also the guideline methodologist, will draft the guideline. A draft table of contents (Appendix 5, available at [www.cmajopen.ca/content/11/2/E248/suppl/DC1](http://www.cmajopen.ca/content/11/2/E248/suppl/DC1)) includes a brief methods section, which will contain a link to this protocol to provide additional details.

#### Peer review

Once the expert panels finalize the guideline, working group members will provide peer review of the contextualization and clarity of the recommendations. Once working group feedback is incorporated into the guideline, expert panel members will nominate additional external stakeholders (e.g., referring clinicians, patients) to approach for external peer review; these stakeholders will be invited via email. The goal of the external feedback is not endorsement, but to ensure that the guidelines and recommendations are clearly written.

### Analysis

Expert panel members will use recommendations from existing guidelines to inform discussions during recommendation formulation.

### Ethics approval

No ethics approval was required for this work.

## Interpretation

Using a transparent and structured approach will help in developing reproducible guidelines across the 13 CAR sections.
Other organizations producing diagnostic imaging guidelines have also published their processes.4,23,24 The CAR website will host the publicly available guidelines, per section, as they are produced. This will allow free access for referring clinicians, radiologists, patients and families, and other producers of diagnostic imaging guidelines. These recommendations are being written to optimize integration into the clinical decision support systems of both community medical facilities and hospitals that have the required infrastructure. For dissemination to offline users, the CAR will produce a digital and paper book once all sections are complete.

We will seek additional funding to work with patient groups to develop patient-friendly summaries, a valuable tool implemented by several organizations, including Cochrane25 and the ACR.26 For feasibility, we will prioritize with patient groups which scenarios require patient-friendly summaries. Discussions around knowledge dissemination are currently underway and may include peer-reviewed publications, newsletters and communications from the partnering organizations of the members of the working group (e.g., The College of Family Physicians of Canada and the Nurse Practitioner Association of Canada). Our expected timeline to complete the 13 sections is December 2023.

### Limitations

There are some limitations to our approach. First, having guidelines as the unit of inclusion in our evidence review does not allow for the evaluation of the 5 GRADE domains that would be assessed in a systematic review of primary studies (i.e., risk of bias, imprecision, indirectness, inconsistency and publication bias).27 Therefore, we must rely on the level of evidence as reported by the guideline group. To ensure some level of certainty or quality of the recommendations in the guidelines, we will include only guidelines that have used a systematic approach to identify the primary studies and that have critically appraised these studies.
Second, the outcomes judged as critical for decision-making by a guideline group may not be the same as the outcomes that the CAR expert panels would have voted as critical.28 However, this limitation is specific to guidelines that rate patient-important outcomes before the conduct of the systematic review, which is not always performed, depending on the guideline methodology used. Third, we will use AI to help with title and abstract screening, and there is a risk that a relevant guideline will be missed. To mitigate this risk, we will implement several checks (e.g., using the AI audit tool in the software, verification of 30% of the records, and allowing expert panel members to nominate guidelines for evaluation against the inclusion criteria). Fourth, as we will use this process for 13 expert panels, we may be required to modify the process. This may be influenced by the availability of expert panel members, by the number of clinical and diagnostic scenarios covered, and by timelines. We aim to adhere to these methods across sections and will report any large deviations from the process in the guidelines.

### Conclusion

A set of up-to-date, Canadian-specific diagnostic imaging referral guidelines is needed for safe, high-value diagnostic imaging referrals and improved patient care in Canadian health care systems. We have described the guideline development process that the CAR will apply across the 13 sections.

## Footnotes

* **Competing interests:** Ryan Margau and Paul Pageau are co-chairs of the Canadian Association of Radiologists Diagnostic Imaging Referrals Guidelines Working Group. Erin Sarrazin is the membership director of the Nurse Practitioner Association of Canada. Charlotte Yong-Hing is the president of the BC Radiological Society and the president-elect of the Canadian Society of Breast Imaging.
Kaitlin Zaki-Metias is the trainee representative on the Canadian Society of Breast Imaging Board of Directors and serves on the RadioGraphics Trainee Editorial Advisory Members Board. No other competing interests were declared.

* This article has been peer reviewed.

* **Contributors:** All authors contributed to the concept and design of the guideline development process described. Candyce Hamel drafted the manuscript, and all other authors critically revised the draft version. All authors gave final approval of the version to be published and agree to act as guarantors of the work.

* **Funding:** This work has been funded by the Canadian Medical Association. The funder did not have any role in the content, in the writing of this manuscript or in the decision to submit for publication.

* **Data sharing:** As the authors are conducting a rapid scoping review and extracting recommendations from existing guidelines, they will not have any data. The guidelines included in the rapid scoping review will be available in tabular form within the Canadian Association of Radiologists (CAR) guidelines and will be made freely available on the CAR website.

* **Supplemental information:** For reviewer comments and the original submission of this manuscript, please see [www.cmajopen.ca/content/11/2/E248/suppl/DC1](http://www.cmajopen.ca/content/11/2/E248/suppl/DC1).

This is an Open Access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY-NC-ND 4.0) licence, which permits use, distribution and reproduction in any medium, provided that the original publication is properly cited, the use is noncommercial (i.e., research or educational use), and no modifications or adaptations are made. See: [https://creativecommons.org/licenses/by-nc-nd/4.0/](https://creativecommons.org/licenses/by-nc-nd/4.0/)

## References

1. Squires JE, Cho-Young D, Aloisio LD, et al. (2022) Inappropriate use of clinical practices in Canada: a systematic review. CMAJ 194:E279–96.
2. O’Sullivan JW, Albasri A, Nicholson BD, et al. (2018) Overtesting and undertesting in primary care: a systematic review and meta-analysis. BMJ Open 8:e018557.
3. Cao DJ, Hurrell C, Patlas MN (2022) Current status of burnout in Canadian radiology. Can Assoc Radiol J doi:10.1177/08465371221117282 [Epub ahead of print].
4. European Society of Radiology (ESR) (2019) Methodology for ESR iGuide content. Insights Imaging 10:32.
5. (2017) RCR iRefer guidelines: making the best use of clinical radiology, 8th ed (The Royal College of Radiologists, London (UK)).
6. Zhang Y, Akl EA, Schünemann HJ (2018) Using systematic reviews in guideline development: the GRADE approach. Res Synth Methods doi:10.1002/jrsm.1313 [Epub ahead of print].
7. (2012) 2012 CAR diagnostic imaging referral guidelines (Canadian Association of Radiologists, Ottawa). Available: [https://car.ca/patient-care/referral-guidelines/](https://car.ca/patient-care/referral-guidelines/) (accessed 2022 Mar. 31).
8. Hunt DL, Haynes RB, Hanna SE, et al. (1998) Effects of computer-based clinical decision support systems on physician performance and patient outcomes: a systematic review. JAMA 280:1339–46.
9. Andrews J, Guyatt G, Oxman AD, et al. (2013) GRADE guidelines: 14. Going from evidence to recommendations: the significance and presentation of recommendations. J Clin Epidemiol 66:719–25.
10. Andrews JC, Schünemann HJ, Oxman AD, et al. (2013) GRADE guidelines: 15. Going from evidence to recommendation-determinants of a recommendation’s direction and strength. J Clin Epidemiol 66:726–35.
11. Guyatt G, Oxman AD, Akl EA, et al. (2011) GRADE guidelines: 1. Introduction-GRADE evidence profiles and summary of findings tables. J Clin Epidemiol 64:383–94.
12. Schünemann HJ, Wiercioch W, Etxeandia I, et al. (2014) Guidelines 2.0: systematic development of a comprehensive checklist for a successful guideline enterprise. CMAJ 186:E123–42.
13. Peters M, Godfrey C, McInerney P, et al. (2020) Chapter 11: Scoping reviews, in JBI manual for evidence synthesis, eds Aromataris E, Munn Z (The Joanna Briggs Institute, Adelaide (AU)). Available: [https://doi.org/10.46658/JBIMES-20-12](https://doi.org/10.46658/JBIMES-20-12) (accessed 2021 Feb. 24).
14. Hamel C, Michaud A, Thuku M, et al. (2021) Defining rapid reviews: a systematic scoping review and thematic analysis of definitions and defining characteristics of rapid reviews. J Clin Epidemiol 129:74–85.
15. Garritty C, Gartlehner G, Nussbaumer-Streit B, et al. (2021) Cochrane Rapid Reviews Methods Group offers evidence-informed guidance to conduct rapid reviews. J Clin Epidemiol 130:13–22.
16. Moher D, Shamseer L, Clarke M, et al. (2015) Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Syst Rev 4:1.
17. Brouwers MC, Kho ME, Browman GP, et al. (2010) AGREE II: advancing guideline development, reporting and evaluation in health care. CMAJ 182:E839–42.
18. AGREE Next Steps Consortium (2017) Appraisal of guidelines for research & evaluation II: the AGREE II instrument [electronic version]. Available: [https://www.agreetrust.org/wp-content/uploads/2017/12/AGREE-II-Users-Manual-and-23-item-Instrument-2009-Update-2017.pdf](https://www.agreetrust.org/wp-content/uploads/2017/12/AGREE-II-Users-Manual-and-23-item-Instrument-2009-Update-2017.pdf) (accessed 2022 Mar. 3).
19. Hamel C, Hersi M, Kelly SE, et al. (2021) Guidance for using artificial intelligence for title and abstract screening while conducting knowledge syntheses. BMC Med Res Methodol 21:285.
20. (2011) DistillerSR (Evidence Partners, Ottawa). Available: [https://v2dis-prod.evidencepartners.com/](https://v2dis-prod.evidencepartners.com/) (login required to access content).
21. Hamel C, Kelly SE, Thavorn K, et al. (2020) An evaluation of DistillerSR’s machine learning-based prioritization tool for title/abstract screening — impact on reviewer-relevant outcomes. BMC Med Res Methodol 20:256.
22. Howard BE, Phillips J, Tandon A, et al. (2020) SWIFT-Active Screener: accelerated document screening through active learning and integrated recall estimation. Environ Int 138:105623.
23. Cascade PN, American College of Radiology (2000) ACR Appropriateness Criteria project. Radiology 214(Suppl):3–46.
24. (2014) Developing NICE guidelines: the manual [PMG20] (National Institute for Health and Care Excellence, London (UK)). Available: [https://www.nice.org.uk/process/pmg20/chapter/introduction](https://www.nice.org.uk/process/pmg20/chapter/introduction) (accessed 2022 Mar. 7).
25. New standards for plain language summaries (Cochrane Consumer Network, London (UK)). Available: [https://consumers.cochrane.org/PLEACS](https://consumers.cochrane.org/PLEACS) (accessed 2021 Oct. 14).
26. Journal of the American College of Radiology. Author instructions for imaging appropriateness criteria (AC) patient-friendly summaries. Available: [https://www.jacr.org/content/ac-patient-summary-submission](https://www.jacr.org/content/ac-patient-summary-submission) (accessed 2023 Feb. 13).
27. Balshem H, Helfand M, Schünemann HJ, et al. (2011) GRADE guidelines: 3. Rating the quality of evidence. J Clin Epidemiol 64:401–6.
28. Guyatt GH, Oxman AD, Kunz R, et al. (2011) GRADE guidelines: 2. Framing the question and deciding on important outcomes. J Clin Epidemiol 64:395–400.

© 2023 CMA Impact Inc. or its licensors