CMAJ Open
Research

Ethical concerns around use of artificial intelligence in health care research from the perspective of patients with meningioma, caregivers and health care providers: a qualitative study

Melissa D. McCradden, Ami Baba, Ashirbani Saha, Sidra Ahmad, Kanwar Boparai, Pantea Fadaiefard and Michael D. Cusimano
February 18, 2020 8 (1) E90-E95; DOI: https://doi.org/10.9778/cmajo.20190151
Author affiliations: Division of Neurosurgery (McCradden, Baba, Saha, Boparai, Fadaiefard, Cusimano), St. Michael’s Hospital, Unity Health Toronto; Dalla Lana School of Public Health (Cusimano), University of Toronto, Toronto, Ont.

Abstract

Background: As artificial intelligence (AI) approaches in research increase and AI becomes more integrated into medicine, there is a need to understand perspectives from members of the Canadian public and medical community. The aim of this project was to investigate current perspectives on ethical issues surrounding AI in health care.

Methods: In this qualitative study, adult patients with meningioma and their caregivers were recruited consecutively (August 2018–February 2019) from a neurosurgical clinic in Toronto. Health care providers caring for these patients were recruited through snowball sampling. Based on a nonsystematic literature search, we constructed 3 vignettes that sought participants’ views on hypothetical issues surrounding potential AI applications in health care. The vignettes were presented to participants in interviews, which lasted 15–45 minutes. Responses were transcribed and coded for concepts, frequency of response types and larger concepts emerging from the interview.

Results: We interviewed 30 participants: 18 patients, 7 caregivers and 5 health care providers. For each question, a variable number of responses were recorded. The majority of participants endorsed nonconsented use of health data but advocated for disclosure and transparency. Few patients and caregivers felt that allocation of health resources should be done via computerized output, and a majority stated that it was inappropriate to delegate such decisions to a computer. Almost all participants felt that selling health data should be prohibited, and a minority stated that less privacy is acceptable for the goal of improving health. Certain caveats were identified, including the desire for deidentification of data and use within trusted institutions.

Interpretation: In this preliminary study, patients and caregivers reported a mixture of hopefulness and concern around the use of AI in health care research, whereas providers were generally more skeptical. These findings provide a point of departure for institutions adopting health AI solutions to consider the ethical implications of this work by understanding stakeholders’ perspectives.

Artificial intelligence (AI) holds immense promise for health care.1–3 The field is evolving rapidly owing to increased computing capacity, the availability of data and the widespread adoption of electronic health records in hospitals. However, current trends in big data use have brought about ethical concerns regarding accountability, responsibility and trust, among others. The views of the public are essential to supporting institutions’ approaches to adopting AI, as well as to guiding education initiatives that may be crucial to maintaining the public’s support and trust. However, the speed of progress and the technology’s potential for benefit are mired in ethical controversies surrounding the use of AI more broadly that may undermine public trust in this technology.4

Public perceptions regarding health data use for research are well characterized,5–13 but work specific to public perceptions of AI remains limited.14–16 Consistently across the globe, members of the public value the benefits of medical research but are concerned about the privacy of personal health data. Even before the era of big data, people voiced concerns about the use of health data for research, particularly as the data are made available to more individuals and groups. Paprica and colleagues12 recently conducted a focus group study on the use of health data with Canadian stakeholders and identified a strong suspicion of private industry, a finding noted by other investigators.7,10,11 Similarly, Kim and colleagues13 found that it matters to patients with whom their health information and biospecimens are shared for research purposes; participants were particularly hesitant to share data with for-profit institutions.

In the present study, we sought to expand the scope of inquiry provided by this prior work by using vignettes to elicit perspectives on a nonexhaustive set of ethical concepts that are central to AI applications in health care. We conducted qualitative interviews with patients, caregivers and health care providers to investigate their perspectives on ethical considerations of AI-enabled research.

Methods

Setting and design

Patients and caregivers were recruited consecutively (August 2018–February 2019) at St. Michael’s Hospital, Toronto, in the senior author’s (M.D.C.) neurosurgical clinic as part of a larger study focused on quality of life among patients with meningioma.17 Patient eligibility criteria were a diagnosis of meningioma, a recent (2008–2018) neurosurgical or neuro-oncologic intervention and the capacity to consent to research. Caregivers included spouses, adult children, relatives and friends who had accompanied the patient to at least 1 clinic appointment. Exclusion criteria were age less than 18 years and lack of fluency in English (given the complexity of the interview content). Health care providers caring for these patients were recruited through snowball sampling. No prior relationship existed between the participants and the interviewers, with the exception of 2 health care providers who had collaborative relationships with the primary investigator (M.D.C.). Participants were aware that the interviewers were part of a research group conducting AI work, which contributed to the rationale for the present study.

After introductions by a member of the patient’s circle of care, interviews were conducted in a private clinic room by a postdoctoral fellow (M.D.M.) or a research assistant (A.B.) (both female). The patient’s caregiver was present in 2 cases. Interviews included collection of baseline demographic information and presentation of 3 vignettes (described below), and lasted about 15–45 minutes. All participants provided written informed consent. Interviews were audio-recorded and transcribed verbatim with consent; in cases in which consent was denied, the interviewer took detailed notes. No repeat interviews were conducted with any of the participants, and they did not see their interview transcripts.

Development of vignettes

To guide vignette development, we searched academic databases (PubMed, Medline, JSTOR and PsycINFO) and performed a Google search to identify a set of ethical principles prioritized consistently for health AI (Appendix 1, available at www.cmajopen.ca/content/8/1/E90/suppl/DC1). The final set included informed consent, privacy, confidentiality, responsibility, accountability, unintended consequences or harms, trust and public engagement (Table 1). We assessed perspectives around these ethical principles through 3 scenarios: data-driven approaches to health care research, use of machine learning in clinics and commercialization of data (Appendix 1). Scenarios were trialled for comprehension and relevance to the intended ethical concept through 3 rounds of feedback with the research team and health care colleagues.

Table 1: Ethical principles prioritized consistently for health artificial intelligence

Participants were told that these scenarios were examples of realistic but hypothetical AI-enabled research. They were asked about their current knowledge of AI before the vignettes were described. After each scenario, participants were asked for their opinions and how they thought characters in the vignette would react or feel. Interviewers refrained from providing additional information beyond the details identified in the interview script (Appendix 1).

Data analysis

Directed content analysis (M.D.M., A.B., P.F.) allowed data interpretation under the umbrella of our predefined ethical concepts.18 M.D.M. has formal education in empirical bioethics, including qualitative methodology, and has published qualitative work previously. A.B. has been trained in qualitative methodology. Responses to open-ended questions were coded based on the prevailing reasoning for the answer(s) given to a particular question. Closed-ended responses were categorized as yes (fully positive), no (fully negative), unsure (between positive and negative) or unknown, and justifications and reasons were noted.
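
The study reports no scripted analysis (coding was done by hand and, as noted below, the data were managed in Excel), but the categorization logic described here is simple enough to sketch in code. In this minimal illustration, the record structure, field names and participant IDs are hypothetical assumptions, not study data:

```python
from collections import Counter

# The four closed-ended response categories described above.
CATEGORIES = {"yes", "no", "unsure", "unknown"}

# Hypothetical coded records: (participant_id, question_id, category).
coded_responses = [
    ("P01", "nonconsented_data_use", "yes"),
    ("P02", "nonconsented_data_use", "unsure"),
    ("P03", "sell_data_to_industry", "no"),
]

def tally(responses, question_id):
    """Count the response categories recorded for one vignette question."""
    return Counter(
        category
        for _, qid, category in responses
        if qid == question_id and category in CATEGORIES
    )

print(tally(coded_responses, "nonconsented_data_use"))
# Counter({'yes': 1, 'unsure': 1})
```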

Previous work indicated that thematic saturation is reached with 12 interviews.19 The emerging themes were presented by the 2 interviewers for discussion with the primary investigator (M.D.C., who has done qualitative work in the past) and team members, who together decided when an acceptable level of saturation had been reached.
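
The saturation judgment here was made by team discussion rather than by a numeric rule, but one common way to make such a judgment inspectable is to track how many previously unseen codes each successive interview contributes. The sketch below is a hypothetical illustration of that idea, not the procedure the authors used:

```python
def new_codes_per_interview(interviews):
    """For each interview (a set of codes), count codes not seen earlier."""
    seen, counts = set(), []
    for codes in interviews:
        counts.append(len(codes - seen))
        seen |= codes
    return counts

def saturated(counts, window=3):
    """Heuristic stopping rule: no new codes across the last `window` interviews."""
    return len(counts) >= window and sum(counts[-window:]) == 0

# Hypothetical coded interviews; each set holds the codes applied to one transcript.
interviews = [
    {"privacy", "trust"},
    {"consent", "trust"},
    {"privacy", "consent"},
    {"trust"},
    {"privacy"},
    {"consent", "trust"},
]
counts = new_codes_per_interview(interviews)
print(counts, saturated(counts))  # [2, 1, 0, 0, 0, 0] True
```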

No coding software was used, and the data were managed with Microsoft Excel 2016.

Ethics approval

Ethics approval was granted by the Unity Health Toronto Research Ethics Board.

Results

Of the 19 patients invited, 1 declined participation. All 7 caregivers and 5 health care providers invited agreed to participate. The participants’ demographic characteristics are shown in Table 2. None had formal experience with AI systems or methodologies.

Table 2: Participant demographic characteristics

Providers’ responses were highly consistent with one another. Patients and caregivers expressed divergent opinions on many issues and offered a range of views. Overall, responses reflected a sense of uncertainty about what the “right” course of action should be in many circumstances (Table 3). Representative participant quotes concerning the ethical concepts are presented in Table 4.

Table 3: Key participant perceptions regarding the use of artificial intelligence in health care research

Table 4: Illustrative participant quotes regarding ethical issues

Conditions of use of data for health care research

There was nearly unanimous agreement that health data are a valuable resource that can be directed toward improving health and disease treatment through research, but disagreement as to the threshold for requiring consent for their use. Many of those who initially advocated for consent felt that, in an urgent, disastrous situation (e.g., a disease outbreak), the circumstances would be sufficiently compelling to warrant an “accelerated process” (participant 18–008, provider) or the complete bypassing of consent. Many nonetheless advocated for disclosure of health data use, through social media, telephone calls, text messages or other media.

Most participants cited deidentification as a satisfactory condition for nonconsented use of health data for research. When asked what deidentification meant, respondents agreed with removing any or all of name, social insurance number, date of birth, address or health care number, as prompted by the interviewer. These perspectives were connected explicitly with the use of data by researchers in health care for the purpose of improving medical care.
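
As a concrete illustration of the operation participants described (stripping direct identifiers from records), a minimal sketch follows. The field names are assumptions for illustration; removing these fields alone would not satisfy formal deidentification standards, since quasi-identifiers can still permit re-identification:

```python
# Direct identifiers named by participants, as hypothetical record fields.
DIRECT_IDENTIFIERS = {
    "name",
    "social_insurance_number",
    "date_of_birth",
    "address",
    "health_card_number",
}

def deidentify(record: dict) -> dict:
    """Return a copy of the record with direct identifiers removed.

    Dropping these fields does not guarantee anonymity: quasi-identifiers
    (e.g., rare diagnoses, postal codes) can still allow re-identification.
    """
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

record = {
    "name": "Jane Doe",
    "date_of_birth": "1960-01-01",
    "health_card_number": "1234-567-890",
    "diagnosis": "meningioma",
}
print(deidentify(record))  # {'diagnosis': 'meningioma'}
```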

Deference to computer outputs

A minority of respondents readily accepted the idea that an output from a “computer” should allocate patients to treatment or no treatment based on its prediction of their probability of benefiting. The lone provider who agreed with this idea likened it to the obligation not to offer treatments that are unlikely to benefit a patient. Those who resisted this notion appealed to fairness or equality (“trying is more important” [participant 18–008, provider]), fair opportunity (“everyone deserves the chance to be treated” [participant 18–017, provider]), evidential uncertainty (“should do more research” [participant 18–015, caregiver]) and individual factors influencing prognosis. All but 1 provider rejected the notion of allocation of treatment by AI, appealing to the need for these decisions to be made collaboratively with patients.

Concerning the vignette that revealed there had been a mistake in the computer system, many participants declared that they had expected such a mistake, and nearly all were accepting of the notion that mistakes happen. Participants almost universally supported disclosure (although 1 patient disagreed, fearing repercussions to the algorithm developers) and reparations, including lawsuits (“they should suck it up and pay” [participant 18–007, caregiver]) and efforts to financially compensate and medically treat patients who were excluded from treatment. Some were less forgiving (“fire them and hire new researchers” [participant 18–049, patient]). When asked who was responsible for the mistake, most participants pointed to those who developed the algorithm, with a few specifically blaming the people who input the data into the computer. One participant said that the person most in charge was responsible for the outcome. One provider described the need to publish and report the negative results so that others would not repeat the mistake.

Secondary use or sale of data

Most participants felt strongly that selling health data to private companies should be prohibited entirely. The few who disagreed argued that loss of privacy is an acceptable sacrifice for the prospect of benefit to the larger population, indicating that, as long as the product being developed would help people and adverse effects were minimal, selling health data was justified. Others described the difficulty of not knowing what kind of product would be developed; 1 participant noted that “every company thinks [it’s] honourable, but it depends on your perspective” [participant 18–002, patient]. Overwhelmingly and regardless of their view, participants advocated for transparency about how health data would be used, communicated openly by a trusted institution or custodian of health information.

No health care providers felt that selling either identifiable or deidentified data was appropriate. They perceived that selling data conflicted with the responsibilities of health data custodians. One provider described patients as a “vulnerable population,” as patients are eager to support any endeavour purported to help others with the same disease, even if they know they themselves will not benefit directly (participant 18–010, provider). The idea that research might be able to “find a cure” was echoed repeatedly in this context by patients and caregivers, seemingly supporting providers’ views.

Trust and public engagement in research

Patients and caregivers reported a high level of trust in health care institutions with regard to ethical practices and acting responsibly vis-à-vis health data by following regulations designed to protect the public. When asked about a duty to participate in research specifically through allowing use of their health data, a few participants stated that people had a duty to allow such use for the specific purpose of researching health-related problems, whereas others indicated no one had such a duty. Nearly all participants who did not express a yes or no answer indicated that they personally felt a sense of duty to contribute their data to research but that not everyone would agree, and individuals’ wishes should be respected. Others described a duty only if the research involved deidentified data and no potential harms to participants.

Several participants described a morally significant difference between data obtained from social media versus health data. All providers stated that health data were special, whereas most patients and caregivers indicated that, in modern society, people are now aware of the consequences of smartphone use, resulting in the minimization of privacy concerns. Despite a perception that data sharing is now inevitable, most participants clearly indicated discomfort with the lack of transparency regarding how their data were being used.

Interpretation

Our participants’ perceptions regarding the use of AI in health research generally did not differ substantively from previously explored views on health data use and research.5–13 Exploring machines as “decision-makers,” however, elicited a range of opinions, with some participants at ease with delegating such decisions to machines and a majority expressing skepticism.

Our participants endorsed the notion that the broad use of health data as a resource to improve health11,12,20 poses risks to personal privacy, in keeping with a previous report.5 Patients derived a sense of altruism in providing their data, which contrasted with the feeling of powerlessness in having a brain tumour; this notion corresponded with 1 provider’s statements that patients may constitute a vulnerable group. Vulnerable groups, however, are not uniform;21–23 although our patient participants strongly supported research, vulnerability from a racialization or socioeconomic perspective often connotes distrust.9 The willingness to engage in health data research that drives AI is likely modified by the disproportionate risks inherent to AI that are carried by various marginalized populations.24–26

McDougall27 speculated that AI may disrupt patient autonomy. We found limited endorsement of deference to computerized outputs in our sample. Xu and colleagues noted that some people would blindly trust a robot to guide them through a rehabilitation protocol.14 However, those authors used an interactive robot that approximated a human interaction, whereas we described the guidance coming from a “computer” (i.e., nonhumanoid). Most (4/5) of our provider participants indicated that treatment decisions require conversations with patients and families. Even among tech-savvy youth seeking treatment for highly stigmatized conditions in a qualitative study, there remained a strong preference for interacting with health care providers to discuss health issues.16

A particular challenge for health AI involves the often-needed collaboration with private industry to develop solutions. Like Paprica and colleagues,12 we found a generally more negative or mixed reaction to sharing health data with private companies. Our participants sharply contrasted the use of data to improve health with prioritizing profit-making.7,10–12 Patients’ trust in health care institutions compels providers to retain a strong understanding of the social licence12 surrounding health data use as AI is integrated into health care.

Future studies may extend these findings by soliciting views from AI-knowledgeable people. In addition, although our sample was not homogeneous, it was selective in that it included people with high levels of health care interaction. It will no doubt be important to capture the views of a more diverse group of people with varying levels of health care interaction.

Limitations

Our study’s findings may have limited generalizability given the population (patients with meningioma and their caregivers and health care providers) and participants’ extensive involvement with the health care system. The health care providers interviewed were selected by convenience sampling and so may not be representative of clinicians generally. However, their responses were consistent with prior work, which suggests that many central concepts, such as the appropriate use of health data and consent, may be inherent to clinicians’ professional duties and are less likely to reflect sampling bias.

It is possible that more detailed views were missed because we left the scenarios somewhat vague. This was consistent with the study aim, which was to provide an initial glimpse into views on health AI. We also intended to avoid excessive complexity around details of AI’s broader adoption in health care that are not yet formalized; to be highly specific at this juncture would be premature.

Conclusion

We highlight initial perspectives surrounding the use of AI in health care research among AI-naive patients, caregivers and health care providers at a large urban hospital. We found a mixture of hope and skepticism regarding the use of AI. The findings reflect previous work citing tensions between privacy and potential benefit. Although there was broad support for the use of AI in health research, this study identified certain caveats, including the desire for deidentification of data and use within trusted institutions, with the goal of contributing toward the improvement of health.

Footnotes

  • Competing interests: Melissa McCradden and Ami Baba received salary support from a Cancer Care Ontario grant. Ashirbani Saha reports grants from the Canadian Institute for Military and Veteran Health Research and Ryerson University, and patient donations from the Brain Matters Foundation, outside the submitted work. Michael Cusimano reports a grant from Cancer Care Ontario during the conduct of the study and grants from the Canadian Institute for Military and Veteran Health Research (CIMVHR Advanced Analytics Initiative) outside the submitted work. No other competing interests were declared.

  • This article has been peer reviewed.

  • Contributors: Melissa McCradden conceived the study. Melissa McCradden, Ashirbani Saha and Sidra Ahmad designed the study. Melissa McCradden and Ami Baba collected the data. Melissa McCradden, Ami Baba, Ashirbani Saha, Kanwar Boparai and Pantea Fadaiefard analyzed and interpreted the data. Melissa McCradden and Sidra Ahmad drafted the manuscript, and Melissa McCradden, Ami Baba, Ashirbani Saha and Michael Cusimano revised it critically for important intellectual content. All of the authors approved the final version to be published and agreed to be accountable for all aspects of the work.

  • Funding: This study was conducted as part of a project funded by Cancer Care Ontario through the Government of Ontario. Sidra Ahmad was supported by a Health Grand Challenge scholarship from the Princeton University Center for Health and Wellbeing.

  • Disclaimer: The funders had no role in the study design, data collection and analysis, decision to publish or preparation of this manuscript.

  • Supplemental information: For reviewer comments and the original submission of this manuscript, please see www.cmajopen.ca/content/8/1/E90/suppl/DC1.

References

1. Hinton G (2018) Deep learning — a technology with the potential to transform health care. JAMA 320:1101–2.
2. Israni ST, Verghese A (2019) Humanizing artificial intelligence. JAMA 321:29–30.
3. Naylor CD (2018) On the prospects for a (deep) learning health care system. JAMA 320:1099–100.
4. Gibney E (2020) The battle for ethical AI at the world’s biggest machine-learning conference. Nature 577:609.
5. Robling MR, Hood K, Houston H, et al. (2004) Public attitudes towards the use of primary care patient record data in medical research without consent: a qualitative study. J Med Ethics 30:104–9.
6. Haddow G, Bruce A, Sathanandam S, et al. (2011) “Nothing is really safe”: a focus group study on the processes of anonymizing and sharing of health data for research purposes. J Eval Clin Pract 17:1140–6.
7. Lehnbom EC, Brien JE, McLachlan AJ (2014) Knowledge and attitudes regarding the personally controlled electronic health record: an Australian national survey. Intern Med J 44:406–9.
8. Grande D, Mitra N, Shah A, et al. (2014) The importance of purpose: moving beyond consent in the societal use of personal health information. Ann Intern Med 161:855–62.
9. Bansal G, Zahedi F, Gefen D (2010) The impact of personal dispositions on privacy and trust in disclosing health information online. Decis Support Syst 49:138–50.
10. Willison DJ, Swinton M, Schwartz L, et al. (2008) Alternatives to project-specific consent for access to personal information for health research: insights from a public dialogue. BMC Med Ethics 9:18.
11. Gaylin DS, Moiduddin A, Mohamoud S, et al. (2011) Public attitudes about health information technology, and its relationship to health care quality, costs, and privacy. Health Serv Res 46:920–38.
12. Paprica PA, de Melo MN, Schull MJ (2019) Social licence and the general public’s attitudes toward research based on linked administrative health data: a qualitative study. CMAJ Open 7:E40–6.
13. Kim J, Kim H, Bell E, et al. (2019) Patient perspectives about decisions to share medical data and biospecimens for research. JAMA Netw Open 2:e199550.
14. Xu J, Bryant DG, Howard A (2018) Would you trust a robot therapist? Validating the equivalency of trust in human–robot healthcare scenarios. 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), 2018 Aug 27–31, Jiangsu Conference Center, Nanjing, China, pp 442–7. Available: https://ieeexplore.ieee.org/abstract/document/8525782/ (accessed 2019 Aug 19).
15. Hengstler M, Enkel E, Duelli S (2016) Applied artificial intelligence and trust — the case of autonomous vehicles and medical assistance devices. Technol Forecast Soc Change 105:105–20.
16. Aicken CRH, Fuller SS, Sutcliffe LJ, et al. (2016) Young people’s perceptions of smartphone-enabled self-testing and online care for sexually transmitted infections: qualitative interview study. BMC Public Health 16:974.
17. Baba A, McCradden MD, Rabski J, et al. (2019) Determining the unmet needs of patients with intracranial meningioma — a qualitative assessment. Neurooncol Pract, 2019 Oct 29. doi:10.1093/nop/npz054.
18. Hsieh HF, Shannon SE (2005) Three approaches to qualitative content analysis. Qual Health Res 15:1277–88.
19. Guest G, Bunce A, Johnson L (2006) How many interviews are enough? An experiment with data saturation and variability. Field Methods 18:59–82.
20. Hastings TM (2004) Family perspectives on integrated child health information systems. J Public Health Manag Pract 10(Suppl):S24–9.
21. Levine C, Faden R, Grady C, et al.; Consortium to Examine Clinical Research Ethics (2004) The limitations of “vulnerability” as a protection for human research participants. Am J Bioeth 4:44–9.
22. Macklin R (2003) Bioethics, vulnerability, and protection. Bioethics 17:472–86.
23. Rogers W, MacKenzie C, Dodds S (2012) Why bioethics needs a concept of vulnerability. Int J Fem Approaches Bioeth 5:11–38.
24. Char DS, Shah NH, Magnus D (2018) Implementing machine learning in health care — addressing ethical challenges. N Engl J Med 378:981–3.
25. Obermeyer Z, Powers B, Vogeli C, et al. (2019) Dissecting racial bias in an algorithm used to manage the health of populations. Science 366:447–53.
26. Schulz A, Caldwell C, Foster S (2003) “What are they going to do with the information?” Latino/Latina and African American perspectives on the Human Genome Project. Health Educ Behav 30:151–69.
27. McDougall RJ (2019) Computer knows best? The need for value-flexibility in medical AI. J Med Ethics 45:156–60.
  • Copyright 2020, Joule Inc. or its licensors