
Artificial intelligence in healthcare: a scoping review of perceived threats to patient rights and safety

Abstract

Background

The global health system remains determined to leverage every workable opportunity, including artificial intelligence (AI), to provide care that is consistent with patients’ needs. Unfortunately, while AI models generally return high accuracy within the trials in which they are trained, their ability to predict and recommend the best course of care for prospective patients remains uncertain.

Purpose

This review maps evidence published between January 1, 2010 and December 31, 2023 on the perceived threats that the use of AI tools in healthcare poses to patients’ rights and safety.

Methods

We followed the guidelines of Tricco et al. to conduct a comprehensive search of the current literature in Nature, PubMed, Scopus, ScienceDirect, Dimensions, Web of Science, EBSCOhost, ProQuest, JSTOR, Semantic Scholar, Taylor & Francis, Emerald, the World Health Organisation, and Google Scholar. In all, 80 peer-reviewed articles qualified and were included in this study.

Results

We report a real chance of unpredictable errors and an inadequate policy and regulatory regime governing the use of AI technologies in healthcare. Moreover, medical paternalism, increased healthcare costs and disparities in insurance coverage, data security and privacy concerns, and biased and discriminatory services are imminent in the use of AI tools in healthcare.

Conclusions

Our findings have critical implications for achieving the Sustainable Development Goals (SDGs) 3.8, 11.7, and 16. We recommend that national governments lead the roll-out of AI tools in their healthcare systems. Other key actors in the healthcare industry should also contribute to developing policies on the use of AI in healthcare systems.


Text box 1. Contributions to the literature

There is an inadequate account of the extent and type of evidence on how:

• Artificial intelligence (AI) tools could commit errors resulting in negative health outcomes for patients

• The lack of clear policy and regulatory regimes for the application of AI tools in patient care threatens the rights, privacy, and autonomy of patients

• AI tools may diminish the active participation of patients in the care process

• AI tools could escalate the overhead cost of providing and receiving essential healthcare

• Faulty and manipulated data, and inadequate machine learning, could result in biased and discriminatory services

Introduction

The global health system is facing unprecedented pressures due to changing demographics, emerging diseases, administrative demands, a dwindling and highly migratory workforce, increasing mortality and morbidity, and changing demands and expectations of information technology [1, 2]. Meanwhile, the needs and expectations of patients are increasing and becoming ever more complicated [1, 3]. The global health system is thus forced to leverage every opportunity, including the use of artificial intelligence (AI), to provide care that is consistent with patients’ needs and values [4, 5]. As expected, AI has become an obvious and central theme in the global narrative due to its enormous potential positive impacts on the healthcare system. AI, in this context, should be construed as the capability of computers to perform tasks similar to those performed by human professionals, including in healthcare [6, 7]. This includes the ability to reason, discover and extrapolate meanings, or learn from previous experience to achieve healthcare goals [4].

The term AI, credited to John McCarthy in 1956, is vast, and it seems there is no consensus yet on what truly constitutes AI [8, 9]. AI is not a single type of technology, but many different types of computerised systems (hardware and software) that require large datasets to realise their full potential [10, 11]. AI tools are transforming the state of healthcare globally, giving hope to patients with conditions that appear to defy traditional treatment techniques [1,2,3]. In clinical decision-making, for instance, AI tools have improved diagnosis, reduced medical errors, stimulated prompt detection of medical emergencies, reduced healthcare costs, improved patient health outcomes, and facilitated public health interventions [3, 4]. Additionally, AI tools have facilitated workflow, improved turnaround times for patients, and improved the accuracy and reliability of patients’ data.

The successes of AI in healthcare seem promising, if not great already, but there is a need for caution. Celebrations of, and expectations for, the capabilities of AI tools in healthcare should be moderated, because these tools also present threats yet to be fully understood and appreciated [6, 12, 13]. So far, there are serious concerns that AI tools could threaten the privacy and autonomy of patients [2, 11]. Moreover, widespread adoption and use of AI tools in healthcare could be confounded by factors such as a lack of standardised patient data, inadequately curated datasets, and a lack of robust legal regimes that clearly define standards for professional practice using AI tools [11]. Additionally, socio-cultural differences, lack of government commitment, the proliferation of AI-savvy persons with malicious intent, irregular supply of electric power, and poverty (especially in the global south) are but a few of the many factors that may work against the potential of AI tools in healthcare [14]. For instance, the algorithms on which AI tools operate can be weaponised to perpetuate discrimination based on race, age, gender, sexual identity, socio-cultural background, social status, and political identity [15, 16]. Notwithstanding their immense capabilities, AI tools are but a means to an end and not an end in themselves.

There is also growing concern over how AI tools could facilitate and perpetuate an unprecedented “infodemic” of misinformation via online social media networks that threatens global public health efforts [17,18,19,20]. In fact, the pandemic of disinformation has led to the coining of the term “infodemiology”, now acknowledged by the WHO and other public health organisations globally as an important scientific field and critical area of practice, especially during major disease outbreaks [17,18,19,20]. Recognising the consequences of disinformation for patients’ rights and safety, and the potential of AI tools to facilitate the same, public health experts have suggested tighter control over patients’ information and advocated for eHealth literacy and science and technology literacy [17,18,19,20]. Additionally, the experts suggested the need to encourage peer review and fact-checking systems to help improve the knowledge and quality of information regarding patient care [17,18,19,20]. Furthermore, there is a need to eliminate delays in the translation and transmission of knowledge in healthcare to mitigate distorting factors such as political, commercial, or malicious influences, as was widely reported during the SARS-CoV-2 outbreak [17,18,19,20].

Moreover, it is difficult to demonstrate how the deployment of AI tools in healthcare is contributing to the realisation of the Sustainable Development Goals (SDGs) 3.8, 11.7, and 16. For instance, SDG 11.7 provides for universal access to safe, inclusive and accessible public spaces, especially for women and children, older persons, and persons with disabilities [24]. Moreover, SDG 3.8 calls for the realisation of universal health coverage, including access to quality essential healthcare services and essential medicines and vaccines for all. SDG 16 advocates for peaceful, inclusive, and just societies for all and building effective, accountable and inclusive institutions at all levels [24]. Thus, to achieve these and many others, there are many questions to be answered.

For instance, will the usage of AI tools in their present state help achieve these SDGs by 2030? What constitutes professional negligence by AI tools in healthcare? Who takes responsibility for the commissions and omissions of AI tools in healthcare? What remedies accrue to patients who suffer serious adverse events from care provided by AI tools? What are the implications of using AI tools in healthcare for patients’ insurance policies? To what extent is an AI tool developer liable for the actions and inactions of these intelligent tools? What constitutes informed consent when AI tools provide care to patients? In the event of conflicting decisions between AI tools and human clinicians, which would hold sway? Obviously, much more research, including reviews, is needed to clearly and confidently respond to these and several other nagging questions. Despite considerable research globally on AI, the majority of this research has been done in non-clinical settings [22, 23]. For instance, randomised controlled trials, the gold standard in medicine, are yet to provide further and better evidence on how AI adversely impacts patients [23]. Therefore, the objective of this review is to map existing evidence on the perceived threats posed by AI tools in healthcare to patients’ rights and safety.

Considering its social implications, this review is envisaged to positively impact the development, deployment, and utilisation of AI tools in patient care services [3, 25,26,27,28,29]. This is anticipated because the review interrogates the main concerns of patients and the general public regarding the use of these intelligent machines. The proposition is that these tools are prone to unpredictable errors, operate under an inadequate policy and regulatory regime, may increase healthcare costs and create disparities in insurance coverage, may breach the privacy and data security of patients, and may provide biased and discriminatory services, all of which are worrying [2, 7, 10, 25]. Therefore, it is envisaged that manufacturers of AI tools will pay attention and factor these concerns into the production of more responsible and patient-friendly AI tools and software. Additionally, medical facilities would subject newly procured AI tools and software to a more rigorous machine learning regime that would allay the concerns of patients and guarantee their rights and safety [25,26,27]. Moreover, the review may trigger the formulation and review of existing policies at the national and medical facility levels, which would provide adequate promotion and protection of the rights and safety of patients from the adverse effects of AI tools [26,27,28].

Furthermore, this review has practical implications for the deployment and application of AI tools in patient care. For instance, it would remind healthcare managers of the need to conduct rigorous machine learning and simulation exercises for AI tools before deploying them in the care process [1,2,3,4,5,6,7,8, 27,28,29]. Moreover, medical professionals would have to scrutinise the decisions of AI tools before making final judgements on patients’ conditions. Again, healthcare professionals would need to find ways to make patients active participants in the care process. Finally, the review would draw the attention of researchers to the issues that could undermine the acceptance of AI tools in patient care services [1,2,3,4,5,6,7,8]. For instance, this review may inform future research directions that explore potential threats posed by AI tools to patients’ rights and safety.

Several reviews have been published recently (between January 1, 2022 and June 25, 2024) on the application of AI tools and software in healthcare [30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48] (see Table 1). Almost half (nine articles) of these recent reviews [30,31,32,33,34,35,36,37,38] explored the positive impacts of AI tools on healthcare services, while another nine [39,40,41,42,43,44,45,46,47] examined both the positive impacts and the potential threats. Of these recent reviews, only one article [48] studied the challenges pertaining to the adoption of AI tools in healthcare. The current review therefore provides a more focused and comprehensive perspective on the threats posed by AI tools to patients’ rights and safety. It specifically interrogates diverse perspectives and collates rich evidence from patients, healthcare workers, and the general public regarding the perceived threats posed by AI tools to patients’ rights and safety.

Table 1 Comparison between key findings in the current and previous reviews on the use of artificial intelligence tools in healthcare

Methods

We scrutinised, synthesised, and analysed peer-reviewed articles according to the guidelines of Tricco et al. [49]. The steps were: (1) defining and examining the study purpose, (2) revising and thoroughly examining the study questions, (3) identifying and discussing search terms, (4) identifying and exploring relevant databases/search engines and downloading articles, (5) data mining, (6) summarising the data and synthesising the results, and (7) consultation.

Research questions

Six study questions guided this review: (1) What are the implications of AI tools for medical errors? (2) What are the ethicolegal implications of AI tools for patient care? (3) What are the implications of AI tools for the patient–provider relationship? (4) What are the implications of AI tools for the cost of healthcare and insurance coverage? (5) What are the potential threats of AI tools to patients’ rights and data security? and (6) What are the perceived implications of AI tools for discrimination and bias in healthcare?

Search strategy

We mapped evidence on the topic using the Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) [49, 50]. We searched the following databases/search engines for peer-reviewed articles: Nature, PubMed, Scopus, ScienceDirect, Dimensions, Web of Science, EBSCOhost, ProQuest, JSTOR, Semantic Scholar, Taylor & Francis, Emerald, the World Health Organisation, and Google Scholar (see Fig. 1; Table 2). To ensure the search process was rigorous and detailed, we first searched PubMed using Medical Subject Headings (MeSH) terms on the topic (see Table 2). The search was conducted at two levels based on the search terms. First, the search terms “Confidentiality” OR “Artificial Intelligence” produced 4,262 articles. Second, the search was guided by 30 MeSH terms and controlled vocabularies, which yielded 1,320 articles (see Fig. 1; Table 2).
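For transparency, the first-level PubMed query can be reproduced programmatically. The sketch below is a minimal illustration assuming Biopython’s Entrez client; the placeholder email address and the exact MeSH field tags are our assumptions, and the full strategy of course spanned all 14 databases/search engines and the 30 MeSH terms listed in Table 2.

```python
# Minimal sketch of the first-level PubMed query (assumes Biopython is installed).
from Bio import Entrez

Entrez.email = "reviewer@example.org"  # NCBI requires a contact address; placeholder

# Level 1: broad Boolean query, restricted to the review window.
handle = Entrez.esearch(
    db="pubmed",
    term='"Confidentiality"[MeSH Terms] OR "Artificial Intelligence"[MeSH Terms]',
    datetype="pdat",      # filter on publication date
    mindate="2010/01/01",
    maxdate="2023/12/31",
    retmax=0,             # only the hit count is needed at this level
)
record = Entrez.read(handle)
print(record["Count"])    # the review reports 4,262 records at this level
```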

Fig. 1 PRISMA flow diagram of articles used in the current review conducted from January 1, 2010 to December 31, 2023

Table 2 Summary of search strategy applied in selecting articles for the current review conducted from January 1, 2010 to December 31, 2023

The search covered studies conducted between January 1, 2010 and December 31, 2023, because the use of AI in healthcare is generally new and was mostly unknown to people in earlier decades. The study itself was conducted between January 1 and December 31, 2023. Through a comprehensive data screening process, we separated all duplicate articles, together with those inconsistent with the inclusion threshold, into a folder and later removed them (see Table 2). The initial screening was conducted by authors 4, 5, 6, 7, 8, and 9; where the qualification of an article was in doubt, it was referred to authors 1, 3, 4, and 10 for further assessment until consensus was reached. Authors 1 and 10 then further reviewed the data. To enhance comprehensiveness and rigour in the search process, citation chaining was conducted on all full-text articles that met the inclusion threshold to identify additional relevant articles for further assessment. Table 2 presents the inclusion and exclusion criteria used in selecting relevant articles for this review.
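The duplicate-separation step can be illustrated with a short script. This is a minimal sketch under assumed record fields (“doi” and “title” are hypothetical export field names), not the tool the authors used; in this review the screening itself was done manually by the author team.

```python
# Sketch: separate duplicate records, keyed on DOI or a normalised title.
import re

def normalise(title: str) -> str:
    """Lower-case a title and strip punctuation/whitespace for matching."""
    return re.sub(r"[^a-z0-9]", "", title.lower())

def deduplicate(records: list[dict]) -> tuple[list[dict], list[dict]]:
    seen, unique, duplicates = set(), [], []
    for rec in records:
        # Prefer the DOI as the identity key; fall back to the normalised title.
        key = rec.get("doi") or normalise(rec.get("title", ""))
        if key and key in seen:
            duplicates.append(rec)   # set aside, mirroring the 'folder' step
        else:
            seen.add(key)
            unique.append(rec)
    return unique, duplicates
```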

Quality rating

We conducted a quality rating of all selected full-text articles based on the guidelines prescribed by Tricco et al. [49]. Thus, each reviewed article had to provide a research background, purpose, context, suitable methods, sampling, data collection and analysis, reflexivity, value of research, and ethics. We assessed and scored all selected articles against these criteria [49]. Articles scoring “A” had few or no limitations, “B” had some limitations, “C” had substantial limitations but possessed value, and “D” carried substantial flaws that could compromise the study as a whole. Articles scoring “D” were therefore removed from the review [49].
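To make the A–D rating concrete, the sketch below maps a per-criterion checklist to the four grades. The numeric cut-offs are illustrative assumptions only; Tricco et al. [49] describe the grades qualitatively rather than numerically.

```python
# Sketch: grade an article from a per-criterion pass/fail checklist.
CRITERIA = [
    "background", "purpose", "context", "method", "sampling",
    "data_collection_analysis", "reflexivity", "value", "ethics",
]

def grade(article: dict[str, bool]) -> str:
    """Return an A-D grade; thresholds are illustrative, not prescribed."""
    limitations = sum(1 for c in CRITERIA if not article.get(c, False))
    if limitations <= 1:
        return "A"  # few or no limitations
    if limitations <= 3:
        return "B"  # some limitations
    if limitations <= 5:
        return "C"  # substantial limitations, but retains value
    return "D"      # substantial flaws: excluded from the review
```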

Data extraction and thematic analysis

All authors independently extracted the data. Authors 5, 6, 7, 8, and 9 extracted data on “authors, purpose, methods, and country”, while authors 1, 2, 3, 4, and 10 extracted data on “perceived threats and conclusions” (see Table 3). Leveraging Cypress [51] and Morse [52], qualitative thematic analysis was conducted by authors 1, 2, 3, 4, and 10. Data were coded, and themes emerged directly from the data, consistent with the study questions [53, 54]. Specifically, the analysis involved repeated reading of the articles to gain deep insight into the data. We then created initial candidate codes and identified and examined emerging themes. Additionally, candidate themes were reviewed, properly defined and named, and extensively discussed until a consensus was reached. Finally, we composed a report and extensively reviewed it to ensure internal and external cohesion of the themes (see Table 4).
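As an illustration of the collation step, the following sketch tallies candidate codes into the named themes of Table 4. The code labels and the code-to-theme mapping are hypothetical stand-ins for the authors’ codebook.

```python
# Sketch: collate per-article candidate codes into theme counts.
from collections import Counter

CODE_TO_THEME = {
    "algorithm_error": "Perceived unpredictable errors",
    "liability_gap": "Inadequate policy and regulatory regime",
    "loss_of_human_touch": "Perceived medical paternalism",
    "cost_increase": "Increased healthcare cost and insurance disparities",
    "data_breach": "Breach of privacy and data security",
    "biased_training_data": "Potential for bias and discriminatory services",
}

# One list of codes per reviewed article (illustrative data only).
article_codes = [
    ["algorithm_error", "data_breach"],
    ["liability_gap", "algorithm_error"],
    ["loss_of_human_touch"],
]

theme_counts = Counter(
    CODE_TO_THEME[code] for codes in article_codes for code in codes
)
print(theme_counts.most_common())  # themes ranked by number of supporting codes
```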

Table 3 Overview of data extracted from articles on the perceived threats of artificial intelligence tools in healthcare
Table 4 Summary of themes emerging from articles on the perceived threats of artificial intelligence tools in healthcare

Results

This scoping review covered 2010 to 2023 and focused on the perceived threats of AI use in healthcare to the rights and safety of patients. We screened 1,320 articles, of which 519 (39%) studied AI applications in healthcare, but only 80 (15%) met the inclusion threshold, passed the quality rating, and were included in this review. Of the 80 articles, 48 (60%) applied a quantitative approach, 23 (29%) a qualitative approach, and 9 (11%) a mixed-methods approach. By year of publication, the 80 articles were distributed as follows: 2023, 1 (1.25%); 2022, 7 (8.75%); 2021, 24 (30%); 2020, 21 (26.25%); 2019, 9 (11.25%); 2018, 7 (8.75%); 2017, 7 (8.75%); 2016, 1 (1.25%); 2015, 2 (2.5%); and 2014, 1 (1.25%). Thus, the years 2020 and 2021 alone accounted for the majority (56.25%) of the articles under review. Furthermore, 26 (32.5%) of the articles came from Asia, 22 (27.5%) from North America, 18 (22.5%) from Europe, 5 (6.25%) from Australia, 5 (6.25%) from South America, 2 (2.5%) from Africa, 1 (1.25%) jointly from North America and Asia, and 1 (1.25%) jointly from North America and Europe (see Fig. 2).

Fig. 2 Geographical distribution of articles used in the current review
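The distributions reported above can be checked arithmetically; the short snippet below (our own verification, not part of the review’s methods) confirms that the yearly counts sum to the 80 included articles and that 2020–2021 jointly account for 56.25%.

```python
# Sanity-check the reported yearly distribution of included articles.
by_year = {2023: 1, 2022: 7, 2021: 24, 2020: 21, 2019: 9,
           2018: 7, 2017: 7, 2016: 1, 2015: 2, 2014: 1}

total = sum(by_year.values())
assert total == 80  # matches the number of included articles

share_2020_21 = 100 * (by_year[2020] + by_year[2021]) / total
print(f"{share_2020_21:.2f}%")  # 56.25%
```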

Perceived unpredictable errors

We report that the majority of the articles reviewed revealed a widespread concern over the possibility of unpredictable errors associated with the use of AI tools in patient care. Of the 80 articles reviewed, 56 (70%) [2, 55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107] reported the concern of AI tools committing unintended errors during care. Consistent with the operation of all machines, intelligent or not, AI tools could commit errors with potentially immeasurable consequences for patients [60,61,62,63,64,65, 100, 103, 106]. This has triggered some hesitation and suspicion about AI applications in healthcare [2, 57, 63, 70]. Perhaps because the use of AI tools in healthcare is largely new and still emerging, their abilities and safety remain largely in doubt [1, 3, 6, 25,26,27,28,29]. Moreover, there are centuries of personal and documented accounts of medical errors (avoidable or not) within the healthcare industry, but it is unclear who becomes responsible or liable if AI tools commit such errors (see Figs. 3 and 4).

Inadequate policy and regulatory regime

The public was also seriously concerned about the lack of adequate policies and regulations, specifically on AI use in healthcare, that define the legal and ethical standards of practice. This is evident in 29 (36%) of the articles [56, 58,59,60, 78, 79, 89, 72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94, 96, 97, 101, 108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124] reviewed in this study. As with all machines, AI tools could get it wrong [56, 78, 79, 94, 97, 101] through malfunction, with potentially terrible consequences for the health and well-being of patients. Thus, where does the burden of liability lie in cases of breach of duty of care, breach of privacy, trespass, or even negligence? There were no specific regulations on AI use in healthcare to define the scope and direction of liability for the ‘professional misconduct’ of these machines, whether their conduct is intelligent or unintelligent [59, 60, 78, 79, 96, 97, 99]. This finding is anticipated because the healthcare sector is already characterised by disputes between patients and medical facilities (including their agents) [12, 22, 23, 48]. Generally, patients want to be clear on what remedies accrue to them when there is a breach in the duty of care. Moreover, healthcare professionals, for their part, want to be clear on who takes responsibility when AI tools provide care that is sub-optimal [12, 22, 48]. Somebody must be responsible: is it the AI tool, the manufacturer, the healthcare facility, or someone else?

Perceived medical paternalism

The application of AI tools could also interfere with traditional patient–doctor interactions and potentially undermine patient satisfaction and the overall quality of care. This was reported by 22 (27%) of the articles reviewed [2, 55, 60, 79, 84, 92, 96, 97, 99,100,101,102, 116, 118, 125,126,127,128,129,130,131,132]. We argue that AI tools lack the humanity required in patient care. Though AI tools may have the ability to better predict the moods of patients, they may not be trusted to competently provide very personal and private services, such as psychological and counselling care [2, 55, 84, 92, 101, 102, 116, 118]. Thus, the personal and human touch that defines the relationship between patients and human clinicians may not be guaranteed through AI applications [2, 97, 99, 100, 118, 125, 129, 132]. It is highly expected that patients will fear losing the opportunity to interact directly with human caregivers (through verbal and non-verbal cues) [11,12,13,14,15, 23]. The question is: is the use of AI tools sending patient care back to the biomedical model of healthcare? The traditional human-to-human interaction between patients and medics may be lost when machine clinicians replace human clinicians in patient care [11,12,13,14,15, 23].

Increased healthcare cost and disparities in insurance coverage

Evidence also showed that the public is concerned that the use of AI tools will increase the cost of healthcare and insurance coverage, as reported in 7 (9%) of the articles [2, 76, 77, 119, 122, 133]. Given that the adoption of AI tools in healthcare could be capital-intensive and potentially inflate the operational cost of care, patients are likely to be forced to pay far more for services, beyond their economic capabilities. Moreover, most health insurance policies do not yet cover services provided by AI tools, leading to disputes over the payment of bills for services relating to AI applications [2, 77, 119, 133]. Healthcare cost is already a major concern for patients globally [7, 11, 16, 27]. Therefore, it is legitimate for patients and the public to become anxious about the possibility of AI tools worsening the rising cost of healthcare and triggering disparities in health insurance coverage. The costs of machine learning, maintenance, data, electricity, security and safety of AI tools and software, and training and retraining of healthcare professionals in the use of AI tools, among many other related costs, could escalate the overhead cost of providing and receiving essential healthcare services [7, 11, 16, 27].

Breach of privacy and data security

We report that the public is concerned about the breach of patient privacy and data security by AI tools. As reported by 5 (7%) of the articles reviewed [2, 55, 79, 81, 119, 123], AI tools have the potential to gather large volumes of patient data in a split second, sometimes without the knowledge of patients or their legal agents. As argued by Morgenstern et al. [79] and Richardson et al. [2], given their sheer complexity and automated abilities, it is difficult to foretell when and how specific patient data are acquired and used by AI tools, a situation that presents a ‘black box’ for patients. Thus, apart from what the patient may be aware of, there is no surety about what else these machine clinicians could procure, albeit unlawfully, about the patient. Furthermore, it is unclear how patient data are indemnified against wrongful use and manipulation [2, 119, 123]. These AI tools could, wittingly or unwittingly, disclose privileged information about a patient, with potentially dire consequences for the privacy and security of patients. It is expected that patients would be apprehensive about the privacy and security of their personal information stored by AI tools [5, 8, 10,11,12,13,14,15,16]. Given that these AI tools could act independently, patients would naturally be worried about what happens to their personal information.

Potential for bias and discriminatory services

The results further suggest that there is potential for discrimination and bias on a large scale when AI tools are used in healthcare. As reported by 5 (6%) of the articles we reviewed [2, 57, 79, 89, 112], the utility of AI is a function of its design and the quality of training provided [2, 57, 112]. In effect, if the data used to train these machines discriminate against a population or group, this could be perpetuated, and potentially escalated, when AI tools are deployed on a wider scale to provide care [2, 57, 79]. Thus, AI tools could perpetuate and escalate pre-existing biases and discrimination, leaving affected populations more marginalised than ever [2, 89, 112]. Bias and discrimination in patient care are already a common feature of healthcare globally [8, 13, 37]. Therefore, the fear that AI tools could be set up to provide biased and discriminatory care is both real and legitimate, because their actions and inactions are based on the data and machine learning provided [8, 13, 37].
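To make this mechanism concrete, the toy sketch below trains a referral model on historically skewed labels and shows that it reproduces the skew for patients with identical clinical need. The data are synthetic, and the setup (scikit-learn, the variable names, the 0.8 skew) is our illustrative assumption, not drawn from the reviewed studies.

```python
# Toy demonstration: a model trained on biased labels reproduces the bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)               # 0 = majority, 1 = minority group
severity = rng.normal(size=n)               # true clinical need, identical across groups

# Historical labels: at equal need, minority patients were referred less often.
referred = severity - 0.8 * group + rng.normal(0.0, 0.5, n) > 0

model = LogisticRegression().fit(np.column_stack([group, severity]), referred)

same_need = np.array([[0, 1.0], [1, 1.0]])  # identical severity, different group
print(model.predict_proba(same_need)[:, 1]) # lower referral probability for group 1
```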

Discussion

There is steady growth in AI research across diverse disciplines and activities globally [1, 134]. However, previous studies [4, 23] raised concerns about the paucity of empirical data on AI use in healthcare. For instance, Khan et al. [23] argued that studies on AI usage in healthcare are unevenly distributed across the world, and many are conducted in non-clinical environments. Consistent with these findings, the current review showed that there is inadequate empirical evidence on the perceived threats of AI use in healthcare. Of the 519 articles on AI use in healthcare, only 80 (15%) met the inclusion threshold of our study. Moreover, affirming findings from previous studies [21, 135], we found an uneven distribution of the selected articles across the continents, with the majority (n = 66; 82.5%) coming from three continents: Asia (n = 26; 32.5%), North America (n = 22; 27.5%), and Europe (n = 18; 22.5%). We discuss our review findings under perceived unpredictable errors, inadequate policy and regulatory regime, perceived medical paternalism, increased healthcare cost and disparities in insurance coverage, perceived breach of privacy and data security, and potential for bias and discriminatory services.

Perceived unpredictable errors

There is little contention about the capacity of AI tools to significantly reduce diagnostic and therapeutic errors in healthcare [10, 138,139,140]. For instance, the huge data-processing capacity and novel epidemiological features of modern AI tools are very effective in the fight against complex infectious diseases such as SARS-CoV-2, and a game-changer in epidemiological research [140]. However, previous studies [1, 12, 22] found that AI tools are limited by factors that could undermine their efficacy and produce adverse outcomes for patients. For instance, power surges, poor internet connectivity, flawed data and faulty algorithms, and hacking could confound the efficacy of AI applications in healthcare. Indeed, hacking and internet failure could constitute the most dangerous threats to the use of AI tools in healthcare, especially in resource-limited countries where internet speed and penetration are very poor [8,9,10,11,12,13]. Furthermore, we found that the fear of unintended harm to patients by AI tools was widely reported in the articles (70%) we reviewed. For instance, the potential for unpredictable errors was raised in a study that investigated perspectives about AI use in healthcare [99]. Similarly, Meehan et al. [141] argued that the generalisability and clinical utility of most AI applications are yet to be formally proven. Besides, concerns over AI-related errors featured in a study on the diagnostic performance, feasibility, and end-user experiences of AI-assisted diabetic retinopathy screening [88], and similar error concerns were raised in the application of an AI-based decision support system (DSS) in the emergency department [81].

The evidence is that patients fear being told “we do not know what went wrong” when AI tools produce adverse outcomes [22]. This is because errors of commission or omission are associated with all machines, including these machine clinicians, whether intelligent or not [12, 22]. Therefore, there is merit in the argument that AI tools should be closely monitored and supervised to avoid, or at least minimise, the impact of unintended harms to patients [138]. We are of the view that the attainment of universal health coverage, including access to quality essential healthcare services, medicines, and vaccines for all by 2030 (SDG 3.8), could be accelerated through evidence-based application of AI tools in healthcare provision [20]. Thus, given that the use of AI tools in healthcare is generally new and still emerging [7, 9, 15, 25,26,27,28,29], uncertainties and suspicions about the trustworthiness of such tools (that is, their capabilities and safety) are natural reactions that should be expected from patients and the general public. However, these concerns could ultimately slow down the achievement of SDG 3.8. Moreover, there have been many occurrences of medical errors (avoidable or not) within the healthcare industry, with dire consequences for patients [13, 29, 30, 37]. Thus, the finding comes as no surprise, because medical care has always been characterised by uncertainties and unpredictable outcomes with dire consequences for patients, families, facilities, and the health system [4, 9, 28, 31].

Inadequate policy and regulatory regime

The fragility of human life requires that those in the healthcare business are held to the highest standards of practice and accountability [13, 24, 137]. Previous studies [10, 22, 136] argued that healthcare must be delivered consistent with ethicolegal and professional standards that uphold the sanctity of life and respect for individuals. In keeping with this, our review showed that the public is worried about the lack of adequate protection against perceived infractions, deliberate or not, by AI tools in healthcare. Concerns over the lack of a clear policy regime to regulate the use of AI applications in patient care featured in a study that integrated a deep learning sepsis detection and management platform, Sepsis Watch, into routine clinical care [92]. Similar concerns were raised in a study that evaluated consecutive patients for suspected acute coronary syndrome [11]. Moreover, similar concerns were found in a study that used neural network (NN) algorithm-based models to improve the Global Registry of Acute Coronary Events (GRACE) score performance in predicting in-hospital mortality in acute coronary syndrome [10].

The contention is that existing policy and legal frameworks are not adequate and clear enough on what remedies accrue to patients who suffer adverse events during AI care. Our view is that patients may be at risk of a new form of discrimination, especially targeted at minority groups, persons with disabilities, and sexual minorities [14]. The need for a robust policy and regulatory regime to protect patients from potential exploitation by AI tools is urgent and apparent. This finding is not strange, because the healthcare sector is already regulated by policies covering its various services [11, 30, 31, 35]. Moreover, because patients are normally the vulnerable party in the patient–healthcare provider relationship [30, 32,33,34,35,36], we argue that patients would seek adequate protection from the actions and inactions of AI tools, but, unfortunately, these machine tools may not have such capabilities. Human clinicians should be equally concerned about who takes responsibility for the infractions of these machine clinicians during patient care [8, 24, 29, 35]. Therefore, there is a need for policy that clearly defines, and gives meaning to, the scope and nature of liability in the relationship between human and machine clinicians during patient care.

Perceived medical paternalism

Intelligent machines hold tremendous prospects for healthcare, but human interaction is still invaluable [3, 21, 141,142,143,144]. According to Chekroud et al. [145], the overriding strength of AI models in healthcare is their superior ability to leverage large datasets to foretell and prescribe the most suitable course of intervention for prospective patients. Unfortunately, the ability of AI models to predict treatment outcomes in schizophrenia, for example, is highly context-dependent and has limited generalisability. Our review revealed that the public is equally worried that AI tools could limit the quality of interaction between patients and human clinicians. Through empathy and compassion, human clinicians are better able to secure effective patient participation in the care process and reach decisions that best serve the personal and cultural values, norms, and perspectives of patients [143].

We found that as AI tools provide various services and care, human clinicians may end up losing some essential skills and professional autonomy [24]. For example, concerns over reductions in critical thinking and professional autonomy were raised in several studies, including a study that used a socio-technical system to implement computer-aided diagnosis [97], a study on adherence to antimicrobial prescribing guidelines and the adoption of computerised decision support systems (CDSSs) [12], and a study on barriers and facilitators to the uptake of an evidence-based CDSS [64]. Thus, human medics need to take a lead role in the care process and seize every opportunity to continually practise and improve their skills. We hold this view because patients normally want to interact directly with human clinicians (through verbal and non-verbal cues) and to be convinced that their conditions are well understood by human beings [2, 3, 16, 31]. Typically, patients want to build a cordial, trust-based relationship with their human clinicians and other human healthcare professionals. However, this may not be feasible when AI clinicians are involved in the care process [11, 16, 26], especially when they act independently. Therefore, the traditional human-to-human interaction between patients and human medics may be lost when machine clinicians take over patient care.

Increased healthcare cost and disparities in insurance coverage

Globally, the cost of healthcare seems too high for the average person [24], but the usage of AI tools could reverse this and make things better [10, 23, 144]. A large body of literature [1, 10, 12, 23, 144] showed that deploying AI tools in healthcare could actually reduce the cost of care for providers and patients. However, we found that the public was of the opinion that AI tools could escalate the cost of healthcare [2, 76, 77, 119, 122, 133], especially for those in the developing world, such as Africa. The reason is that healthcare facilities would have to procure, operate, and maintain these tools, and the cost is almost certain to be shifted to patients [2]. For instance, in addition to concerns over the cost of care, limited insurance coverage was a concern raised in the use of AI-based computer-assisted diagnosis (CADx) in training healthcare workers [67]. Similar concerns featured in a study that explored the costs and yield of systematic HIV-TB screening, including computer-aided digital chest X-ray testing [68], and in another study involving the use of a medical-grade wireless monitoring system based on wearable and AI technology [103].

Furthermore, some of our reviewed articles [2, 12] reported that most health insurance companies were yet to incorporate AI medical services into their policies. This situation has implications for health equity and universal health coverage. We contend that the promotion of inclusive and just societies for all and the building of effective, accountable, and inclusive institutions at all levels by 2030 (SDG 16) may not be achieved without affordable and accessible healthcare, including the use of advanced technologies such as AI in health [24]. Thus, governments, especially in the developing world, need to financially support healthcare facilities to implement AI services and ensure that costs do not increase health disparities but rather reduce health inequalities. The cost of healthcare is one of the major barriers to access to quality healthcare services globally [7, 9, 13,14,15,16,17,18,19,20,21,22,23]. Therefore, patients and the public are anxious about how the use of AI tools in patient care may further escalate the cost of healthcare services. The costs of machine learning, maintenance, data, electricity, security and safety of AI tools and software, and training and retraining of healthcare professionals in the use of AI tools, among many other related costs, could escalate the overhead cost of providing and receiving essential healthcare services [7, 16, 25], and would be disproportionately precarious in resource-limited societies.

Perceived breach of privacy and data security

The fundamental obligation of a healthcare system is to provide reasonable privacy for all patients and ensure adequate protection of patients’ data from malicious use [9, 11, 16]. Some studies [12, 136] suggested that AI tools in healthcare guarantee better protection for patients’ privacy and data. Contrary to this, our review found that the public is worried that AI tools may undermine patient privacy and data security, because the existing structures for upholding patient privacy and data integrity are grossly inadequate [2, 7]. For example, patients’ privacy and data security concerns were raised in a study that investigated the interactions between healthcare robots and older patients [8]. Similar concerns were raised in studies that investigated AI in healthcare [23] and public perception and knowledge of AI use in healthcare, therapy, and diagnosis [102].

There seems to be merit in these fears because of the paucity of evidence to the contrary. Moreover, the current review found that AI tools could wittingly or unwittingly disclose privileged information about a patient. Such a situation has the potential for dire consequences for patients, including job loss, stigma, discrimination, isolation, the breakdown of relationships and trust, and legal battles [11]. It is our view that, because the use of AI tools in patient care is still emerging, most patients are not very familiar with these tools and are also uncertain about the trustworthiness of these machine clinicians [9,10,11,12,13,14,15,16]. Therefore, it is very natural that patients would be apprehensive about the privacy and security of their information procured and stored by non-human medics that cannot be questioned. These concerns are widespread because of the capacity of AI tools to act independently [11, 16].

Potential for bias and discriminatory services

Algorithms based on flawed or limited data could trigger prejudices based on race, culture, gender, or social status [2, 4, 11, 15, 24]. For instance, previous studies [3, 12, 15, 24] reported that pre-existing and new forms of bias and discrimination against under-represented groups could be worsened in the absence of responsible AI tools. We found that the public is concerned about the potential of AI tools to discriminate against specific groups. For instance, such fears were raised in a study that assessed consecutive patients for suspected acute coronary syndrome. Similar concerns were found in a study that determined the impact of AI on public health practice [72], and in another study that explored the views of patients about various AI applications in healthcare [87]. Thus, the public strongly advocates for effective human oversight and governance to curb the potential excesses of AI tools during patient care [2, 4, 15, 24]. We believe that the algorithms employed by AI tools should not absolve medics and their facilities from responsibility. We further contend that, until the necessary steps are taken, AI usage in healthcare could undermine SDG 11.7, which calls for universal access to safe, inclusive, and accessible public spaces for all by 2030 [24]. The evidence is that patients and the public are generally aware of biased and discriminatory services at many medical facilities [4, 11, 15]. Therefore, the fear that AI tools could be deliberately set up to provide biased and racialised care that compromises, rather than improves, health outcomes is legitimate [4, 15].

Limitations

Notwithstanding the contributions of this study to the body of knowledge and practice, some limitations are noteworthy. First, the use of only primary studies written in English may have limited the literature sampled; future research may resolve this by broadening the search beyond the English language. Future research may also leverage software that can translate articles written in other languages into English, making future reviews more representative than the current one. Besides, articles that failed the inclusion criteria may have contained very useful information on the topic, so revising the inclusion and exclusion criteria could help increase the article base of future reviews. Moreover, we recognise that the current review may have inherited some weaknesses and biases from the included articles. We also acknowledge that the interpretation of some findings of this review, for instance on perceived medical paternalism, disparities in insurance coverage, and biased and discriminatory services, may differ across the globe. Thus, future researchers may have to reflect carefully on the context of candidate articles before drawing conclusions from the findings. Additionally, future research should carefully examine the limitations reported in the included articles to shape the discussion and the conclusions reached. This would help improve the overall reliability of the findings and conclusions reached by future reviews.

Possible future research direction

Compared with previous approaches and interventions for addressing the challenges in patient care, AI tools are emerging as arguably the most promising technology for better patient health outcomes. While AI tools have so far made noteworthy impacts on the healthcare industry, key actors (such as healthcare professionals, patients, and the general public) have expressed concerns that need further and better interrogation. Therefore, it would be appropriate for future researchers to lead and shape the debate on the potential threats of the use of AI tools in healthcare and ways to address them. For instance, future research could focus on how AI tools compromise the total quality of care for sexual minorities, especially in Africa and the developing world in general. This is necessary, given that this group remains largely marginalised in access to basic healthcare services. Additionally, future research may deliberately and comprehensively examine how AI tools promote racialised healthcare services and make proposals for redress.

Furthermore, future research may probe the challenges and quality of machine learning, especially in Africa and the developing world in general. Future research could also examine existing legal and policy frameworks regarding the use of AI tools in patient care, comparing the situation across continents. Additionally, future research could look at how AI tools may contribute to the realisation of the health-related SDGs. Findings from such research could be leveraged to make AI tools more efficient, acceptable, safe, accessible, culturally sensitive, and cost-effective for all. Finally, future research may investigate how AI tools are contributing to disinformation that could be undermining patients’ rights and safety. This is important given how the “infodemic” of false information undermined the global fight against the SARS-CoV-2 pandemic [17,18,19,20]. Such work would help guarantee more effective and efficient approaches to upholding patients’ rights and safety during crises such as pandemics and epidemics.

Contribution to body of knowledge

Several reviews have explored the use of AI tools in healthcare [30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48]. While acknowledging the significant contributions of previous reviews to the field, the current review provides some novelty. It offers a more detailed and comprehensive outlook on the subject by focusing specifically on the potential threats intelligent tools pose to patient care. This is significant because most previous studies explored both the prospects and the threats of AI use in healthcare, and even the few that focused on the potential threats are limited in scope and depth. Furthermore, while previous reviews considered only patients or healthcare providers, and drew on fewer articles, the current study examined AI use in health from diverse perspectives, including those of patients, healthcare professionals, and the general public, using a large volume of data (80 articles) in this fast-paced AI revolution. While no single study could exhaustively address all issues on the subject, given the explosion of the literature on AI tools, the current review emphasises the need to pay attention to the issues that matter to both patients and experts in the field of patient care. Certainly, producers and designers of AI machines and software, experts in AI machine learning, medics, and governments across the world should find the findings of the current review useful in making AI tools and software safer, more efficient, cost-effective, user-friendly, and culturally sensitive.

Suggestions to addressing potential threats by AI tools in patients care

Healthcare professionals, manufacturers and designers of AI tools and software, and policy makers may benefit from the following suggestions for improving AI tools and allied devices and making them safer, more efficient, cost-effective, culturally sensitive, and accessible to all.

  • To ensure greater efficiency and fully optimise AI tools and software, healthcare managers need to phase in the deployment and use of these machines gradually. The AI tools and software should therefore be subjected to a rigorous machine learning regime using rich and robust data. The machine learning could start with a small dataset and later scale up to large datasets with diverse characteristics.

  • Manufacturers and designers of AI tools and related machines need to collaborate with healthcare experts and researchers, patient-rights coalitions and experts, and experts in medicolegal issues to ensure the responsible usage of AI tools and software in healthcare.

  • Governments need to commission a team composed of healthcare experts and researchers, patient-rights coalitions and experts, manufacturers and designers, and experts in medicolegal issues to develop policies for AI use in healthcare.

  • Healthcare managers could commission a team (composed of medical experts and managers) to verify the decisions of AI tools during patient care. This would help ensure that patients are protected from harmful decisions by AI tools during care.

Conclusions

We report that the use of AI tools is fast emerging in global healthcare systems. While these tools hold enormous prospects for global health, including patient care, they present potential threats that are worthy of note. For instance, there is potential for breaches of patients’ privacy, and AI tools could trigger prejudices based on race, culture, gender, or social status. Moreover, AI tools could commit errors that may harm or compromise patients’ quality of health or health outcomes. Additionally, AI tools could limit active patient participation in the care process, resulting in machine-centred care and depriving patients of the psycho-emotional aspects of care. Furthermore, AI tools could potentially increase the cost of care and may even result in disputes between patients and insurance companies, generating a new dimension of legal disputes. Unfortunately, there are inadequate policies and regulations that define the ethicolegal and professional standards for the use of AI tools in healthcare. Clearly, these issues could undermine our quest towards the realisation of SDGs 3.8, 11.7, and 16. To change the narrative, governments should commit to the development, deployment, and responsible use of AI tools in healthcare.

To ensure greater efficiency and fully optimise AI tools and software, healthcare managers could subject AI tools and software to rigorous machine learning regimes using rich and robust data. Also, manufacturers and designers of AI tools need to collaborate with other key stakeholders in healthcare to ensure responsible use of AI tools and software in patient care. Additionally, governments need to commission a team of AI and health experts to develop policies on AI use in healthcare.

Fig. 3 Summary of key findings from previous reviews on the use of artificial intelligence tools in healthcare

Fig. 4 Summary of key findings in the current review on the use of artificial intelligence tools in healthcare

Data availability

No datasets were generated or analysed during the current study.

Abbreviations

AI: Artificial intelligence
SDGs: Sustainable Development Goals
PRISMA-ScR: Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for Scoping Reviews
MeSH: Medical Subject Headings

References

  1. Reddy S, Fox J, Purohit MP. Artificial intelligence-enabled healthcare delivery. J R Soc Med. 2019. https://doiorg.publicaciones.saludcastillayleon.es/10.1177/0141076818815510.

    Article  PubMed  Google Scholar 

  2. Richardson JP, Smith C, Curtis S, Watson S, Zhu X, Barry B, et al. Patient apprehensions about the use of artificial intelligence in healthcare. Npj Digit Med. 2021. https://doiorg.publicaciones.saludcastillayleon.es/10.1038/s41746-021-00509-1.

    Article  PubMed  PubMed Central  Google Scholar 

  3. Kerasidou A. Artificial intelligence and the ongoing need for empathy, compassion and trust in healthcare. Bull World Health Organisation. 2020. https://doiorg.publicaciones.saludcastillayleon.es/10.2471/BLT.19.237198.

    Article  Google Scholar 

  4. Rubeis G. iHealth: the ethics of artificial intelligence and big data in mental healthcare. Internet Interventions. 2022. https://doiorg.publicaciones.saludcastillayleon.es/10.1016/j.invent.2022.100518.

  5. Solanki P, Grundy J, Hussain W. Operationalising ethics in artificial intelligence for healthcare: a framework for AI developers. AI Ethics. 2023. https://doiorg.publicaciones.saludcastillayleon.es/10.1007/s43681-022-00195-z.

    Article  Google Scholar 

  6. Chen C, Ding S, Wang J. Digital health for aging populations. Nat Med. 2023. https://doiorg.publicaciones.saludcastillayleon.es/10.1038/s41591-023-02391-8.

    Article  PubMed  PubMed Central  Google Scholar 

  7. Naik N, Hameed BMZ, Shetty DK, Swain D, Shah M, Paul R, et al. Legal and ethical consideration in artificial intelligence in healthcare: who takes responsibility? Front Surg. 2022. https://doiorg.publicaciones.saludcastillayleon.es/10.3389/fsurg.2022.862322.

    Article  PubMed  PubMed Central  Google Scholar 

  8. Bahl AK. Artificial intelligence and healthcare. J Clin Diagn Res. 2022. https://doiorg.publicaciones.saludcastillayleon.es/10.7860/jcdr/2022/56148.17020.

    Article  Google Scholar 

  9. Khalid N, Qayyum A, Bilal M, Al-Fuqaha A, Qadir J. Privacy-preserving artificial intelligence in healthcare: techniques and applications. Comput Biol Med. 2023. https://doiorg.publicaciones.saludcastillayleon.es/10.1016/j.compbiomed.2023.106848.

    Article  PubMed  Google Scholar 

  10. Radanliev P, De Roure D. Advancing the cybersecurity of the healthcare system with self-optimising and self-adaptative artificial intelligence (part 2). Health Technol. 2022. https://doiorg.publicaciones.saludcastillayleon.es/10.1007/s12553-022-00691-6.

    Article  Google Scholar 

  11. Wang Y, Chen TT, Chiu M. A systematic approach to enhance the explainability of artificial intelligence in healthcare with application to diagnosis of diabetes. Healthc Analytics. 2023. https://doiorg.publicaciones.saludcastillayleon.es/10.1016/j.health.2023.100183.

    Article  Google Scholar 

  12. Horgan D, Romao M, Morré SA, Kalra D. Artificial intelligence: power for civilisation - and for Better Healthcare. Public Health Genomics. 2019. https://doiorg.publicaciones.saludcastillayleon.es/10.1159/000504785.

    Article  PubMed  Google Scholar 

  13. Lord R, Roseen D. Why should we care? In do no harm. New America. 2019; http://www.jstor.org/stable/resrep19972.6. Accessed 13 Jun 2023.

  14. Center of Intellectual Property and Technology Law (CIPTL). State of AI in Africa 2023. Nairobi, Kenya: Author. 2023; https://creativecommons.org/licenses/by-nc-sa/4.0. Accessed 13 Jun 2023.

  15. Cataleta MS. Humane artificial intelligence: The fragility of human rights facing AI. East-West Center. 2020; http://www.jstor.org/stable/resrep25514. Accessed 13 Jun 2023.

  16. Davenport TH, Kalakota R. The potential for artificial intelligence in healthcare. Future Healthc J. 2019. https://doiorg.publicaciones.saludcastillayleon.es/10.7861/futurehosp.6-2-94.

    Article  PubMed  PubMed Central  Google Scholar 

  17. Zarocostas J. How to fight an infodemic. Lancet. 2020. https://doiorg.publicaciones.saludcastillayleon.es/10.1016/S0140-6736(20)30461-X.

    Article  PubMed  PubMed Central  Google Scholar 

  18. Eysenbach G. How to fight an infodemic: the four pillars of infodemic management. J Med Internet Res. 2020. https://doiorg.publicaciones.saludcastillayleon.es/10.2196/21820.

    Article  PubMed  PubMed Central  Google Scholar 

  19. Hang CH, Yu P-D, Chen S, Tan CW, Chen G. MEGA: machine learning-enhanced graph analytics for infodemic risk management. IEEE J Biomedical Health Inf. 2023. https://doiorg.publicaciones.saludcastillayleon.es/10.1109/JBHI.2023.3314632.

    Article  Google Scholar 

  20. Gallotti R, Valle F, Castaldo N, Sacco P, De Domenico M. Assessing the risk of infodemic in response to COVID-19 epidemics. Nat Hum Behav. 2020. https://doiorg.publicaciones.saludcastillayleon.es/10.1038/s41562-020-00994-6.

    Article  PubMed  Google Scholar 

  21. Manso JA, Ferrer RT, Pidevall I, Ballester J, Martin-Fumadó C. Use of photography in dermatology: ethical and legal implications. 2020; https://doiorg.publicaciones.saludcastillayleon.es/10.1016/j.adengl.2019.04.020

  22. Alami H, Lehoux P, Denis J-L, Motulsky A, Petitgand C, Savoldelli M, et al. Organisational readiness for artificial intelligence in health care: insights for decision-making and practice. J Health Organ Manag. 2021. https://doi.org/10.1108/JHOM-03-2020-0074.

  23. Khan B, Fatima H, Qureshi A, Kumar S, Hanan A, Hussain J, et al. Drawbacks of artificial intelligence and their potential solutions in the healthcare sector. Biomed Mater Devices. 2023. https://doi.org/10.1007/s44174-023-00063-2.

  24. World Health Organisation. Ethical use of artificial intelligence: principles, guidelines, frameworks and human rights standards. In: WHO consultation towards the development of guidance on ethics and governance of artificial intelligence for health: meeting report. Geneva, Switzerland: World Health Organisation; 2021a. http://www.jstor.org/stable/resrep35680.8. Accessed 13 Jun 2023.

  25. Gupta P, Maharaj T, Weiss M, Rahaman N, Alsdurf H, Minoyan N, et al. Proactive contact tracing. PLOS Digit Health. 2023. https://doi.org/10.1371/journal.pdig.0000199.

  26. Hang C-N, Tsai Y-Z, Yu P-D, Chen J, Tan C-W. Privacy-enhancing digital contact tracing with machine learning for pandemic response: a comprehensive review. Big Data Cogn Comput. 2023. https://doi.org/10.3390/bdcc7020108.

  27. International Labour Organisation. World employment and social outlook. Geneva, Switzerland: International Labour Office; 2024. https://doi.org/10.54394/HQAE1085.

  28. Shaheen MY. AI in healthcare: medical and socio-economic benefits and challenges. Preprint. 2021. https://doi.org/10.14293/S2199-1006.1.SOR-PPRQNI1.v1.

  29. Shaheen MY. Application of artificial intelligence (AI) in healthcare: a review. Preprint. 2021. https://doi.org/10.14293/S2199-1006.1.SOR-PPRQNI1.v1.

  30. Al Kuwaiti A, Nazer K, Al-Reedy A, Al-Shehri S, Al-Muhanna A, Subbarayalu AV, Al Muhanna D, Al-Muhanna FA. A review of the role of artificial intelligence in healthcare. J Pers Med. 2023. https://doi.org/10.3390/jpm13060951.

  31. Alnasser B. A review of literature on the economic implications of implementing artificial intelligence in healthcare. E-Health Telecommun Syst Netw. 2023. https://doi.org/10.4236/etsn.2023.123003.

  32. Botha NN, Ansah EW, Segbedzi CE, Dumahasi VK, Maneen S, Kodom RV, Tsedze IS, Akoto LA, Atsu FS. Artificial intelligent tools: evidence-mapping on the perceived positive effects on patient-care and confidentiality. BMC Digit Health. 2024. https://doi.org/10.1186/s44247-024-00091-y.

  33. Kitsios F, Kamariotou M, Syngelakis AI, Talias MA. Recent advances of artificial intelligence in healthcare: a systematic literature review. Appl Sci. 2023. https://doi.org/10.3390/app13137479.

  34. Krishnan G, Singh S, Pathania M, Gosavi S, Abhishek S, Parchani A, Dhar M. Artificial intelligence in clinical medicine: catalyzing a sustainable global healthcare paradigm. Front Artif Intell. 2023. https://doi.org/10.3389/frai.2023.1227091.

  35. Lambert SI, Madi M, Sopka S, Lenes A, Stange H, Buszello C, Stephan A. An integrative review on the acceptance of artificial intelligence among healthcare professionals in hospitals. NPJ Digit Med. 2023. https://doi.org/10.1038/s41746-023-00852-5.

  36. Tucci V, Saary J, Doyle TE. Factors influencing trust in medical artificial intelligence for healthcare professionals: a narrative review. J Med Artif Intell. 2022. https://doi.org/10.21037/jmai-21-25.

  37. World Health Organisation. Global review of the role of artificial intelligence and machine learning in health-care financing for UHC. Geneva, Switzerland: World Health Organisation; 2023. http://creativecommons.org/licenses/by-nc-sa/3.0/igo.

  38. Wu H, Lu X, Wang H. The application of artificial intelligence in health care resource allocation before and during the COVID-19 pandemic: scoping review. JMIR AI. 2023. https://ai.jmir.org/2023/1/e38397.

  39. Ahsan MM, Luna SA, Siddique Z. Machine-learning-based disease diagnosis: a comprehensive review. Healthcare. 2022. https://doi.org/10.3390/healthcare10030541.

  40. Ali O, Abdelbaki W, Shrestha A, Elbasi E, Alryalat MAA, Dwivedi YK. A systematic literature review of artificial intelligence in the healthcare sector: benefits, challenges, methodologies, and functionalities. J Innov Knowl. 2023. https://doi.org/10.1016/j.jik.2023.100333.

  41. Alowais SA, Alghamdi SS, Alsuhebany N, Alqahtani T, Alshaya AI, Almohareb SN, Aldairem A, Alrashed M, Saleh KB, Badreldin HA, Al Yami MS, Al Harbi S, Albekairy AM. Revolutionizing healthcare: the role of artificial intelligence in clinical practice. BMC Med Educ. 2023. https://doi.org/10.1186/s12909-023-04698-z.

  42. Kooli C, Al Muftah H. Artificial intelligence in healthcare: a comprehensive review of its ethical concerns. Technol Sustain. 2022. https://doi.org/10.1108/TECHS-12-2021-0029.

  43. Kumar P, Chauhan S, Awasthi KL. Artificial intelligence in healthcare: review, ethics, trust challenges & future research directions. Eng Appl Artif Intell. 2023. https://doi.org/10.1016/j.engappai.2023.105894.

  44. Lindroth H, Nalaie K, Raghu R, Ayala IN, Busch C, Bhattacharyya A, Moreno Franco P, Diedrich DA, Pickering BW, Herasevich V. Applied artificial intelligence in healthcare: a review of computer vision technology application in hospital settings. J Imaging. 2024. https://doi.org/10.3390/jimaging10040081.

  45. Mohamed Fahim J. A review paper on artificial intelligence in healthcare. Int J Eng Manag Humanit (IJEMH). 2022. https://doi.org/10.13140/RG.2.2.25981.23529.

  46. Olawade DB, Wada OJ, David-Olawade AC, Kunonga E, Abaire O, Ling J. Using artificial intelligence to improve public health: a narrative review. Front Public Health. 2023. https://doi.org/10.3389/fpubh.2023.1196397.

  47. Bharati S, Mondal MRH, Podder P. A review on explainable artificial intelligence for healthcare: why, how, and when? IEEE Trans Artif Intell. 2023. https://doi.org/10.1109/TAI.2023.3266418.

  48. Aldwean A, Tenney D. Artificial intelligence in healthcare sector: a literature review of the adoption challenges. Open J Bus Manag. 2024. https://doi.org/10.4236/ojbm.2024.121009.

  49. Tricco AC, Lillie E, Zarin W, O’Brien KK, Colquhoun H, Levac D, et al. PRISMA extension for scoping reviews (PRISMA-ScR): checklist and explanation. Ann Intern Med. 2018. https://doi.org/10.7326/M18-0850.

  50. Munn Z, Peters MDJ, Stern C, Tufanaru C, McArthur A, Aromataris E. Systematic review or scoping review? Guidance for authors when choosing between a systematic or scoping review approach. BMC Med Res Methodol. 2018. https://doi.org/10.1186/s12874-018-0611-x.

  51. Cypress BS. Rigor or reliability and validity in qualitative research: perspectives, strategies, reconceptualization, and recommendations. Dimens Crit Care Nurs. 2017. https://doi.org/10.1097/DCC.0000000000000253.

  52. Morse JM. Critical analysis of strategies for determining rigor in qualitative inquiry. Qual Health Res. 2015. https://doi.org/10.1177/1049732315588501.

  53. Sundler AJ, Lindberg E, Nilsson C, Palmér L. Qualitative thematic analysis based on descriptive phenomenology. Nurs Open. 2019. https://doi.org/10.1002/nop2.275.

  54. Van Wijngaarden E, van der Meide H, Dahlberg K. Researching health care as a meaningful practice: towards a nondualistic view on evidence for qualitative research. Qual Health Res. 2017. https://doi.org/10.1177/1049732317711133.

  55. Fritsch SJ, Blankenheim A, Wahl A, Hetfeld P, Maassen O, Deffge S, et al. Attitudes and perception of artificial intelligence in healthcare: a cross-sectional survey among patients. Digit Health. 2022. https://doi.org/10.1177/20552076221116772.

  56. Al’Aref SJ, Singh G, van Rosendael AR, et al. Determinants of in-hospital mortality after percutaneous coronary intervention: a machine learning approach. J Am Heart Assoc. 2019;8(5):e011160.

  57. Al’Aref SJ, Singh G, Choi JW, et al. A boosted ensemble algorithm for determination of plaque stability in high-risk patients on coronary CTA. JACC Cardiovasc Imaging. 2020;13(10):2162–73.

  58. Aljarboa S, Shah M, Kerr D. Perceptions of the adoption of clinical decision support systems in the Saudi healthcare sector. In: Blake J, Miah SJ, Houghton L, Kerr D, editors. Proceedings of the 24th Asia-Pacific Decision Sciences Institute International Conference; 2019. pp. 40–53.

  59. Borracci RA, Higa CC, Ciambrone G, Gambarte J. Treatment of individual predictors with neural network algorithms improves global registry of acute coronary events score discrimination. Arch Cardiol Mex. 2021;91(1):58–65. https://doi.org/10.24875/ACM.20000011.

  60. Catho G, et al. Factors determining the adherence to antimicrobial guidelines and the adoption of computerised decision support systems by physicians: a qualitative study in three European hospitals. Int J Med Inform. 2020;141:104233.

  61. Dogan MV, Beach S, Simons R, Lendasse A, Penaluna B, Philibert R. Blood-based biomarkers for predicting the risk for five-year incident coronary heart disease in the Framingham Heart Study via machine learning. Genes. 2018;9(12).

  62. Fan X, et al. Utilization of self-diagnosis health chatbots in real-world settings: case study. J Med Internet Res. 2021;23:e19928.

  63. Golpour P, Ghayour-Mobarhan M, Saki A, et al. Comparison of support vector machine, naïve Bayes and logistic regression for assessing the necessity for coronary angiography. Int J Environ Res Public Health. 2020;17(18):6449–50.

  64. Horsfall HL, et al. Attitudes of the surgical team toward artificial intelligence in neurosurgery: international 2-stage cross-sectional survey. World Neurosurg. 2021;146:e724–30.

  65. Hu D, Dong W, Lu X, Duan H, He K, Huang Z. Evidential MACE prediction of acute coronary syndrome using electronic health records. BMC Med Inform Decis Mak. 2019;19:S2.

  66. Jauk S, et al. Technology acceptance of a machine learning algorithm predicting delirium in a clinical setting: a mixed-methods study. J Med Syst. 2021;45:48.

  67. Joloudari JH, Hassannataj Joloudari E, Saadatfar H, et al. Coronary artery disease diagnosis: ranking the significant features using a random trees model. Int J Environ Res Public Health. 2020;17(3):731.

  68. Kanagasundaram NS, et al. Computerized clinical decision support for the early recognition and management of acute kidney injury: a qualitative evaluation of end-user experience. Clin Kidney J. 2016;9:57–62.

  69. Kayvanpour E, Gi WT, Sedaghat-Hamedani F, et al. MicroRNA neural networks improve diagnosis of acute coronary syndrome (ACS). J Mol Cell Cardiol. 2021;151:155–62.

  70. Khong PCB, Hoi SY, Holroyd E, Wang W. Nurses’ clinical decision making on adopting a wound clinical decision support system. Comput Inform Nurs. 2015;33:295–305.

  71. Kim JK, Kang S. Neural network-based coronary heart disease risk prediction using feature correlation analysis. J Healthc Eng. 2017. https://doi.org/10.1155/2017/2780501.

  72. Kitzmiller RR, et al. Diffusing an innovation: clinician perceptions of continuous predictive analytics monitoring in intensive care. Appl Clin Inform. 2019;10:295–306.

  73. Krittanawong C, Virk HUH, Kumar A, et al. Machine learning and deep learning to predict mortality in patients with spontaneous coronary artery dissection. Sci Rep. 2021;11(1).

  74. Li D, Xiong G, Zeng H, Zhou Q, Jiang J, Guo X. Machine learning-aided risk stratification system for the prediction of coronary artery disease. Int J Cardiol. 2021;326:30–4.

  75. Liu X, Jiang J, Wei L, et al. Prediction of all-cause mortality in coronary artery disease patients with atrial fibrillation based on machine learning models. BMC Cardiovasc Disord. 2021;21:499. https://doi.org/10.1186/s12872-021-02314-w.

  76. Love SM, et al. Palpable breast lump triage by minimally trained operators in Mexico using computer-assisted diagnosis and low-cost ultrasound. J Glob Oncol. 2018. https://doi.org/10.1200/JGO.17.00222.

  77. McBride KE, Steffens D, Duncan K, Bannon PG, Solomon MJ. Knowledge and attitudes of theatre staff prior to the implementation of robotic-assisted surgery in the public sector. PLoS ONE. 2019;14:e0213840.

  78. Mehta N, Harish V, Bilimoria K, et al. Knowledge and attitudes on artificial intelligence in healthcare: a provincial survey study of medical students. MedEdPublish. 2021. https://doi.org/10.15694/mep.2021.000075.1.

  79. Morgenstern JD, Rosella LC, Daley MJ, Goel V, Schünemann HJ, Piggott T. “AI’s gonna have an impact on everything in society, so it has to have an impact on public health”: a fundamental qualitative descriptive study of the implications of artificial intelligence for public health. BMC Public Health. 2021. https://doi.org/10.1186/s12889-020-10030-x.

  80. Motwani M, Dey D, Berman DS, et al. Machine learning for prediction of all-cause mortality in patients with suspected coronary artery disease: a 5-year multicentre prospective registry analysis. Eur Heart J. 2017;38(7):500–7.

  81. Betriana F, Tanioka T, Osaka K, Kawai C, Yasuhara Y, Locsin RC. Improving the delivery of palliative care through predictive modeling and healthcare informatics. J Am Med Inform Assoc. 2021;28:1065–73.

  82. Naushad SM, Hussain T, Indumathi B, Samreen K, Alrokayan SA, Kutala VK. Machine learning algorithm-based risk prediction model of coronary artery disease. Mol Biol Rep. 2018;45(5):901–10.

  83. Nydert P, Vég A, Bastholm-Rahmner P, Lindemalm S. Pediatricians’ understanding and experiences of an electronic clinical-decision-support-system. Online J Public Health Inform. 2017;9:e200.

  84. Omar A, Ellenius J, Lindemalm S. Evaluation of electronic prescribing decision support system at a tertiary care pediatric hospital: the user acceptance perspective. Stud Health Technol Inform. 2017;234:256–61.

  85. Orlenko A, Kofink D, Lyytikäinen LP, et al. Model selection for metabolomics: predicting diagnosis of coronary artery disease using automated machine learning. Bioinformatics. 2020;36(6):1772–8.

  86. Panicker RO, Sabu MK. Factors influencing the adoption of computerized medical diagnosing system for tuberculosis. Int J Inf Technol. 2020;12:503–12.

  87. Petitgand C, Motulsky A, Denis J-L, Régis C. Investigating the barriers to physician adoption of an artificial intelligence-based decision support system in emergency care: an interpretative qualitative study. In: Digital personalized health and medicine. Amsterdam, The Netherlands: IOS Press; 2020. pp. 1001–5.

  88. Pieszko K. Predicting long-term mortality after acute coronary syndrome using machine learning techniques and hematological markers. Dis Markers. 2019;2019:9.

  89. Ploug T, Sundby A, Moeslund TB, Holm S. Population preferences for performance and explainability of artificial intelligence in health care: choice-based conjoint survey. J Med Internet Res. 2021;23:e26611. https://doi.org/10.2196/26611.

  90. Polero LD. A machine learning algorithm for risk prediction of acute coronary syndrome (angina). Rev Argent Cardiol. 2020;88:9–13.

  91. Romero-Brufau S, Wyatt KD, Boyum P, Mickelson M, Moore M, Cognetta-Rieke C. Implementation of artificial intelligence-based clinical decision support to reduce hospital readmissions at a regional hospital. Appl Clin Inform. 2020;11:570–7.

  92. Sarwar S, Dent A, Faust K, Richer M, Djuric U, Ommeren RV, et al. Physician perspectives on integration of artificial intelligence into diagnostic pathology. NPJ Digit Med. 2019. https://doi.org/10.1038/s41746-019-0106-0.

  93. Scheetz J, Koca D, McGuinness M, Holloway E, Tan Z, Zhu Z, et al. Real-world artificial intelligence-based opportunistic screening for diabetic retinopathy in endocrinology and indigenous healthcare settings in Australia. Sci Rep. 2021. https://doi.org/10.1038/s41598-021-94178-5.

  94. Schuh C, de Bruin JS, Seeling W. Clinical decision support systems at the Vienna General Hospital using Arden Syntax: design, implementation, and integration. Artif Intell Med. 2018;92:24–33.

  95. Sherazi SWA, Jeong YJ, Jae MH, Bae JW, Lee JY. A machine learning-based 1-year mortality prediction model after hospital discharge for clinical patients with acute coronary syndrome. Health Informatics J. 2020;26(2):1289–304.

  96. Sujan M, White S, Habli I, Reynolds N. Stakeholder perceptions of the safety and assurance of artificial intelligence in healthcare. SSRN Electron J. 2022. https://doi.org/10.2139/ssrn.4000675.

  97. Terry AL, Kueper JK, Beleno R, Brown JB, Cejic S, Dang J, et al. Is primary health care ready for artificial intelligence? What do primary health care stakeholders say? BMC Med Inform Decis Mak. 2022. https://doi.org/10.1186/s12911-022-01984-6.

  98. Tscholl DW, Weiss M, Handschin L, Spahn DR, Nöthiger CB. User perceptions of avatar-based patient monitoring: a mixed qualitative and quantitative study. BMC Anesthesiol. 2018;18:188.

  99. Ugarte-Gil C, et al. Implementing a socio-technical system for computer-aided tuberculosis diagnosis in Peru: a field trial among health professionals in resource-constraint settings. Health Informatics J. 2020;26:2762–75.

  100. Van der Zander QEW, van der Ende-van Loon MCM, Janssen JMM, Winkens B, van der Sommen F, Masclee AAM, et al. Artificial intelligence in (gastrointestinal) healthcare: patients’ and physicians’ perspectives. Sci Rep. 2022. https://doi.org/10.1038/s41598-022-20958-2.

  101. Visram S, Leyden D, Annesley O, et al. Engaging children and young people on the potential role of artificial intelligence in medicine. Pediatr Res. 2023;93:440–4. https://doi.org/10.1038/s41390-022-02053-4.

  102. Wang D, et al. “Brilliant AI Doctor” in rural clinics: challenges in AI-powered clinical decision support system deployment. In: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems; 2021. pp. 1–18.

  103. Xu H, Li P, Yang Z, Liu X, Wang Z, Yan W, He M, Chu W, She Y, Li Y, et al. Construction and application of a medical-grade wireless monitoring system for physiological signals at general wards. J Med Syst. 2020;44:1–15.

  104. Zhai H, et al. Radiation oncologists’ perceptions of adopting an artificial intelligence-assisted contouring technology: model development and questionnaire study. J Med Internet Res. 2021;23:1–16.

  105. Zhang H, Wang X, Liu C, et al. Detection of coronary artery disease using multi-modal feature fusion and hybrid feature selection. Physiol Meas. 2020;41(11):115007.

  106. Zhou N, et al. Concordance study between IBM Watson for Oncology and clinical practice for patients with cancer in China. Oncologist. 2019;24:812–9.

  107. Zhou LY, Yin W, Wang J, et al. A novel laboratory-based model to predict the presence of obstructive coronary artery disease: comparison to coronary artery disease consortium 1/2 score, Duke clinical score and Diamond-Forrester score in China. Int Heart J. 2020;61(3):437–46.

  108. Alumran A, et al. Utilization of an electronic triage system by emergency department nurses. J Multidiscip Healthc. 2020;13:339–44.

  109. Ayatollahi H, Gholamhosseini L, Salehi M. Predicting coronary artery disease: a comparison between two data mining algorithms. BMC Public Health. 2019;19(1):448. https://doi.org/10.1186/s12889-019-6721-5.

  110. Baskaran L. Machine learning insight into the role of imaging and clinical variables for the prediction of obstructive coronary artery disease and revascularization: an exploratory analysis of the CONSERVE study. PLoS ONE. 2020;15(6):e0233791.

  111. Betriana F, Tanioka T, Osaka K, Kawai C, Yasuhara Y, Locsin RC. Interactions between healthcare robots and older people in Japan: a qualitative descriptive analysis study. Jpn J Nurs Sci. 2021;18:e12409.

  112. Bouzid Z, Faramand Z, Gregg RE, et al. In search of an optimal subset of ECG features to augment the diagnosis of acute coronary syndrome at the emergency department. J Am Heart Assoc. 2021;10(3):e017871.

  113. Davari Dolatabadi A, Khadem SEZ, Asl BM. Automated diagnosis of coronary artery disease (CAD) patients using optimized SVM. Comput Methods Programs Biomed. 2017;138:117–26.

  114. Du Z, Yang Y, Zheng J, et al. Accurate prediction of coronary heart disease for patients with hypertension from electronic health records with big data and machine-learning methods: model development and performance evaluation. JMIR Med Inform. 2020;8(7):e17257.

  115. Gonçalves LS, Amaro MLM, Romero ALM, Schamne FK, Fressatto JL, Bezerra CW. Implementation of an artificial intelligence algorithm for sepsis detection. Rev Bras Enferm. 2020;73:e20180421.

  116. Isbanner S, Pauline O, Steel D, Wilcock S, Carter S. The adoption of artificial intelligence in health care and social services in Australia: findings from a methodologically innovative national survey of values and attitudes (the AVA-AI study). J Med Internet Res. 2022. https://doi.org/10.2196/37611.

  117. Lee EK, Atallah HY, Wright MD, Post ET, Thomas CIV, Wu DT, Haley LL. Transforming hospital emergency department workflow and patient care. Interfaces. 2015;45:58–82.

  118. Liberati EG, et al. What hinders the uptake of computerized decision support systems in hospitals? A qualitative study and framework for implementation. Implement Sci. 2017;12:1–13.

  119. Petersson L, Larsson I, Nygren JM, Nilsen P, Neher M, Reed JE, et al. Challenges to implementing artificial intelligence in healthcare: a qualitative interview study with healthcare leaders in Sweden. BMC Health Serv Res. 2022. https://doi.org/10.1186/s12913-022-08215-8.

  120. Prakash A, Das S. Intelligent conversational agents in mental healthcare services: a thematic analysis of user perceptions. Pac Asia J Assoc Inf Syst. 2020;12(2):1–34. https://doi.org/10.17705/1pais.1201.

  121. Pumplun L, Fecho M, Wahl N, Peters F, Buxmann P. Adoption of machine learning systems for medical diagnostics in clinics: qualitative interview study. J Med Internet Res. 2021;23:e29301.

  122. Sendak MP, Ratliff W, Sarro D, Alderton E, Futoma J, Gao M. Real-world integration of a sepsis deep learning technology into routine clinical care: implementation study. JMIR Med Inform. 2020;8:e15182.

  123. Wittal CG, Hammer D, Klein F, Rittchen J. Perception and knowledge of artificial intelligence in healthcare, therapy and diagnostics: a population-representative survey. 2022. https://doi.org/10.1101/2022.12.01.22282960.

  124. Zheng B, et al. Attitudes of medical workers in China toward artificial intelligence in ophthalmology: a comparative survey. BMC Health Serv Res. 2021;21:1067.

  125. Blanco N, et al. Health care worker perceptions toward computerized clinical decision support tools for Clostridium difficile infection reduction: a qualitative study at 2 hospitals. Am J Infect Control. 2018;46:1160–6.

  126. Elahi C, et al. An attitude survey and assessment of the feasibility, acceptability, and usability of a traumatic brain injury decision support tool in Uganda. World Neurosurg. 2020;139:495–504.

  127. Fan W, Liu J, Zhu S, Pardalos PM. Investigating the impacting factors for the healthcare professionals to adopt artificial intelligence-based medical diagnosis support system (AIMDSS). Ann Oper Res. 2020;294:567–92.

  128. Garzon-Chavez D, et al. Adapting for the COVID-19 pandemic in Ecuador, a characterization of hospital strategies and patients. PLoS ONE. 2021;16:e0251295.

  129. Grau LE, Weiss J, O’Leary TK, Camenga D, Bernstein SL. Electronic decision support for treatment of hospitalized smokers: a qualitative analysis of physicians’ knowledge, attitudes, and practices. Drug Alcohol Depend. 2019;194:296–301.

  130. McCoy A, Das R. Reducing patient mortality, length of stay and readmissions through machine learning-based sepsis prediction in the emergency department, intensive care unit and hospital floor units. BMJ Open Qual. 2017;6:e000158.

  131. O’Leary P, Carroll N, Richardson I. The practitioner’s perspective on clinical pathway support systems. In: Proceedings of the IEEE International Conference on Healthcare Informatics; 2014. pp. 194–201.

  132. Van der Heijden AA, Abramoff MD, Verbraak F, van Hecke MV, Liem A, Nijpels G. Validation of automated screening for referable diabetic retinopathy with the IDx-DR device in the Hoorn Diabetes Care System. Acta Ophthalmol. 2018;96:63–8.

  133. MacPherson P, et al. Computer-aided X-ray screening for tuberculosis and HIV testing among adults with cough in Malawi (the PROSPECT study): a randomised trial and cost-effectiveness analysis. PLoS Med. 2021;18:e1003752.

  134. Hitti E, Hadid D, Melki J, Kaddoura R, Alameddine M. Mobile device use among emergency department healthcare professionals: prevalence, utilisation and attitudes. Sci Rep. 2021. https://doi.org/10.1038/s41598-021-81278-5.

  135. Arakpogun EO, Elsahn Z, Olan F, Elsahn F. Artificial intelligence in Africa: challenges and opportunities. In: Hamdan A, Hassanien AE, Razzaque A, Alareeni B, editors. Entrepreneurship, innovation and strategy, marketing, operations and systems. Cham, Switzerland: Springer; 2022. pp. 375–88. https://doi.org/10.1007/978-3-030-62796-6_22.

  136. Leenes RE, Palmerini E, Koops B, Bertolini A, Salvini P, Lucivero F. Regulatory challenges of robotics: some guidelines for addressing legal and ethical issues. Law Innov Technol. 2017. https://doi.org/10.1080/17579961.2017.1304921.

  137. World Health Organisation. Addressing challenges to ethics and governance. In: WHO consultation towards the development of guidance on ethics and governance of artificial intelligence for health: meeting report. Geneva, Switzerland: World Health Organisation; 2021b. http://www.jstor.org/stable/resrep35680.10. Accessed 21 Jul 2023.

  138. Jiang F, Jiang Y, Zhi H, Dong Y, Li H, Ma S, et al. Artificial intelligence in healthcare: past, present and future. Stroke Vasc Neurol. 2017. https://doi.org/10.1136/svn-2017-000101.

  139. Coiera E, Liu S. Evidence synthesis, digital scribes, and translational challenges for artificial intelligence in healthcare. Cell Rep Med. 2022. https://doi.org/10.1016/j.xcrm.2022.100860.

  140. Fei Z, Ryeznik Y, Sverdlov O, Tan CW, Wong WK. An overview of healthcare data analytics with applications to the COVID-19 pandemic. IEEE Trans Big Data. 2022. https://doi.org/10.1109/TBDATA.2021.3103458.

  141. Meehan AJ, Lewis SJ, Fazel S, Fusar-Poli P, Steyerberg EW, Stahl D, Danese A. Clinical prediction models in psychiatry: a systematic review of two decades of progress and challenges. Mol Psychiatry. 2022;27:2700–8. https://doi.org/10.1038/s41380-022-01528-4.

  142. Krumholz HM. In the US, patient data privacy is an illusion. BMJ. 2023. https://doi.org/10.1136/bmj.p1225.

  143. Rentmeester C. Heeding humanity in an age of electronic health records: Heidegger, Levinas, and healthcare. Nurs Philos. 2018. https://doi.org/10.1111/nup.12214.

  144. Silva W, Sacramento CQ, Silva E, Garcia AC, Ferreira SB. Health information, human factors and privacy issues in mobile health applications. Hawaii Int Conf Syst Sci. 2020. https://doi.org/10.24251/hicss.2020.420.

  145. Chekroud AM, Hawrilenko M, Loho H, Bondar J, Gueorguieva R, Hasan A, Kambeitz J, Corlett PR, Koutsouleris N, Krumholz HM, Krystal JH, Paulus M. Illusory generalizability of clinical prediction models. Science. 2024;383(6679):164–7. https://doi.org/10.1126/science.adg8538.

Acknowledgements

We are grateful to Lieutenant Commander (Ghana Navy) Candice FLEISCHER-DJOLETO of 37 Military Hospital, Ghana Armed Forces Medical Services, for proofreading the draft manuscript.

Funding

No author received funding for any part of this study.

Author information

Authors and Affiliations

Authors

Contributions

NNB, EWA, CES, SM, and VKD conceptualised and designed the review protocols. EWA, VKD, CES, FSA, RVK, IST, LAA, SM, OUL, and NNB conducted data collection and acquisition. EWA, VKD, CES, FSA, IST, LAA, SM, OUL, RVK, and NNB carried out extensive data processing and management. EWA, CES, and NNB developed the initial manuscript. All authors edited and considerably reviewed the manuscript, proofread it for intellectual content, and consented to its publication.

Corresponding author

Correspondence to Nkosi Nkosi Botha.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

All authors consented to publish this paper.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.

About this article

Cite this article

Botha, N.N., Segbedzi, C.E., Dumahasi, V.K. et al. Artificial intelligence in healthcare: a scoping review of perceived threats to patient rights and safety. Arch Public Health 82, 188 (2024). https://doi.org/10.1186/s13690-024-01414-1


DOI: https://doi.org/10.1186/s13690-024-01414-1

Keywords