Ser | Author(s) | Purpose | Approach | Continent | Potential Threats | Conclusion |
---|---|---|---|---|---|---|
1 | Al’Aref et al. (2019) | Using the New York Percutaneous Coronary Intervention Reporting System to elucidate the determinants of in-hospital mortality in patients undergoing percutaneous coronary intervention | Quantitative, 479,804 patients | North America | 1. May not apply to all cohorts of patients 2. No clear ethical and legal regime | High accuracy predictive potential for in-hospital mortality in patients undergoing percutaneous coronary intervention |
2 | Al’Aref et al. (2020) | Culprit Lesion (CL) precursors among Acute Coronary Syndrome (ACS) patients based on computed tomography-based plaque characteristics | Quantitative, 468 patients | North America | 1. Potential for bias 2. Potential for medical errors | A boosted ensemble algorithm can be used to distinguish culprit lesion precursors from non-culprit lesion precursors on coronary Computed Tomography Angiography (CTA) |
3 | Aljarboa and Miah (2021) | Perceptions about Clinical Decision Support Systems (CDSS) uptake among healthcare sectors | Qualitative, 54 healthcare workers | Asia | 1. Potential for error 2. Uncertainty about the future effects | Patients’ confidence and diagnostic accuracy were new determinants of CDSS acceptability that emerged in this study |
4 | Alumran et al. (2020) | Electronic Canadian Triage and Acute Scale (E-CTAS) utilisation in emergency department | Quantitative, 71 respondents | Asia | No clear regulations in place | Years of nursing experience moderated the utilisation of E-CTAS |
5 | Ayatollahi et al. (2019) | Positive Predictive Value (PPV) of Cardiovascular Disease prediction using Artificial Neural Network (ANN) and Support Vector Machine (SVM) algorithms, and their comparative performance in predicting Cardiovascular Disease | Quantitative, 1,324 | Asia | No clear legal regime | The SVM algorithm presented higher accuracy and better performance than the ANN model and was characterised by higher power and sensitivity |
6 | Baskaran (2020) | Using machine learning to gain insight into the relative importance of variables to predict obstructive Coronary Artery Disease (CAD) | Quantitative, 719 patients | North America | No clear legal regime | The machine learning model showed BMI to be an important variable, although BMI is currently not included in most risk scores |
7 | Betriana (2021a) | Improving access to palliative care (PC) by integrating a predictive model into a comprehensive clinical framework | Quantitative, 68,349 in-patient encounters | North America | Inadequate legal and ethical systems | A machine learning model can effectively predict the need for in-patient palliative care consult and has been successfully integrated into practice to refer new patients to palliative care |
8 | Betriana (2021b) | Interactions between healthcare robots and older persons | Qualitative | Asia | 1. Patients’ safety and security concerns 2. Legal and ethical concerns | Interaction between healthcare robots and older people may improve quality of care |
9 | Blanco et al. (2018) | Barriers and facilitators related to uptake of Computerised Clinical Decision Support (CCDS) tools as part of a Clostridium Difficile Infection (CDI) reduction bundle | Qualitative, 34 participants | North America | Perceived loss of autonomy and clinical judgement | Findings shaped the development of Clostridium Difficile Infection reduction bundle |
10 | Borracci et al. (2021) | Application of Neural Network (NN) algorithm-based models to improve the Global Registry of Acute Coronary Events (GRACE) score performance to predict in-hospital mortality in acute coronary syndrome | Quantitative, 1,255 admitted patients | South America | 1. No clear regulations 2. Faulty data can lead to inaccurate results | Treatment of individual predictors of the GRACE score with NN algorithms improved accuracy and discrimination power in all models |
11 | Bouzid et al. (2021) | Consecutive patients evaluated for suspected Acute Coronary Syndrome | Quantitative, 554 respondents | North America | 1. Could produce biased results 2. No clear regulations | Identified a subset of novel electrocardiographic features predictive of acute coronary syndrome, with a fully interpretable model highly adaptable to clinical decision support applications |
12 | Catho et al. (2020) | Adherence to antimicrobial prescribing guidelines and Computerised Decision Support Systems (CDSSs) adoption | Qualitative, 29 participants | Europe | 1. Time-consuming 2. Reduces clinicians’ critical thinking and professional autonomy and raises new medico-legal issues 3. Effective CDSSs will require specific features | Features that could improve adoption include user-friendliness, ergonomics, transparency of the decision-making process and workflow |
13 | Davari Dolatabadi et al. (2017) | Automatic diagnosis of normal and Coronary Artery Disease conditions using Heart Rate Variability (HRV) signal extracted from electrocardiogram | Quantitative | Asia | Inadequate legal regime | Methods based on the feature extraction of the biomedical signals are an appropriate approach to predict the health situation of patients |
14 | Dogan et al. (2018) | Examined whether machine learning approaches could be used to develop a similar panel to predict Coronary Heart Disease (CHD) | Quantitative, 1,180 and 524 training and test sets, respectively | North America | Results are inconclusive | The AI tool is more sensitive than conventional risk-factor-based approaches, and performs well in both males and females |
15 | Du et al. (2020) | Developing a high-precision Coronary Heart Disease (CHD) prediction model through big data and machine learning | Quantitative, 42,676 patients | Asia | No clear ethical and legal regime | Accurate risk prediction of coronary heart disease from electronic health records is possible given a sufficiently large population of training data |
16 | Elahi et al. (2020) | Traumatic Brain Injury (TBI) prognostic models | Mixed, 25 questionnaires and interviews | Africa | Poor internet connectivity may undermine its utility | Addressed unmet needs to determine the feasibility of TBI clinical decision support systems in low-resource settings |
17 | Fan et al. (2020) | Integration of unified theory of user acceptance of technology and trust theory for exploring the adoption of Artificial Intelligence-Based Medical Diagnosis Support System (AIMDSS) | Quantitative, 191 respondents (healthcare workers) | Asia | Needs specialised skills to operate | The empirical examination demonstrates a high predictive power of this proposed model in explaining AIMDSS utilisation |
18 | Fan et al. (2021) | Real-world utilisation of AI health chatbot for primary care self-diagnosis | Mixed, 16,519 users | Asia | Sceptical about its utility in patient care | Although the AI tool is perceived convenient in improving patient care, issues and barriers exist |
19 | Fritsch et al. (2022) | Investigate perception about artificial intelligence in healthcare | Quantitative survey, 452 patients and their companions | Europe | 1. Unpredictable errors/mistakes 2. Cyberattack and implications for data privacy 3. Interferes with patient-doctor relationship 4. Diagnosis should be subject to physician assessment 5. Endanger data privacy and protection 6. Limited penetration | Patients and their companions are open to AI usage in healthcare and see it as a positive development |
20 | Garzon-Chavez et al. (2021) | Utilisation of an AI-assisted computed tomography screening tool for COVID-19 patients at triage | Quantitative, 75 chest CTs | South America | May conflict with diagnostic decisions of clinicians | There were differences in laboratory parameters between cases at the intensive care and non-intensive care units |
21 | Golpour et al. (2020) | Compare support vector machine, naïve Bayes and logistic regression to determine the diagnostic factors that can predict the need for Coronary Angiography | Quantitative, 1,187 candidates | Asia | Depends on accurate and large data set | Gender, age and fasting blood sugar were found to be the most important factors predicting the result of coronary angiography |
22 | Gonçalves (2020) | Nurses’ experiences with technological tools to support the early detection of sepsis | Qualitative | South America | Inadequate legal regime | Involving nurses in the technology incorporation process enables rapid decision-making in the identification of sepsis |
23 | Grau et al. (2019) | Using Electronic Support Tools and Orders for Prevention of Smoking (E-STOPS) | Qualitative, 21 participants | North America | Clinical judgement is limited | Improvements in provider training and feedback as well as the timing and content of the electronic tools may increase their use by physicians |
24 | Horsfall et al. (2021) | Attitudes of surgeons and the wider surgical team toward the role of artificial intelligence in neurosurgery | Mixed, 33 participants and 100 respondents | North America | Not very effective during post-operative management and follow-ups | Artificial intelligence widely accepted as a useful tool in neurosurgery |
25 | Hu et al. (2019) | Using Rough Set Theory (RST) and Dempster-Shafer Theory (DST) of evidence to remedy Major Adverse Cardiac Event (MACE) prediction | Quantitative, 2,930 acute coronary syndrome patient samples | Asia | Needs regular, large, and accurate data | The model achieved better performance for the problem of MACE prediction when compared with the single models |
26 | Isbanner et al. (2022) | Assess and compare public judgments about AI use in healthcare | Quantitative survey, 4,448 respondents (general public) | Australia | 1. Breach of ethical and social values 2. Interferes with patient-doctor contact | AI systems should augment rather than replace humans in the provision of healthcare |
27 | Jauk et al. (2021) | Machine learning-based application for predicting the risk of delirium for in-patients | Mixed, 47 questionnaire and 15 expert group (clinicians) | Europe | Not effective in detecting delirium at an early stage | In order to improve quality and safety in healthcare, computerised decision support should predict actionable events and be highly accepted by users |
28 | Joloudari et al. (2020) | Integrated method using random trees (RTs), decision tree of C5.0, support vector machine (SVM), and decision tree of Chi-squared automatic interaction detection (CHAID) | Quantitative | Asia | Needs regular, accurate, and large volumes of data | The random tree model yielded the highest accuracy rate among the models |
29 | Kanagasundaram et al. (2016) | Using in-patient Acute Kidney Injury (AKI) Computerised Clinical Decision Support (CCDS) | Qualitative, 24 interviews | Australia | 1. Disrupts workflow 2. Seen as a hindrance to work | Systems intruding on workflow, particularly involving complex interactions, may be unsustainable even if there has been a positive impact on care |
30 | Kayvanpour et al. (2021) | Genome-wide miRNA levels in a prospective cohort of patients with clinically suspected Acute Coronary Syndromes by applying an in silico neural network | Quantitative, 2,930 samples | Europe | Needs large and accurate data to produce reliable outcomes | The approach opens the possibility to include multi-modal data points to further increase precision and classification performance for other differential diagnoses |
31 | Khong et al. (2015) | Adoption of a wound clinical decision support system as an evidence-based technology by nurses | Qualitative, 14 registered nurses | Asia | 1. Can lead to loss of essential clinical skills 2. Prone to errors | Improved knowledge of nurses’ decisions to interact with the computer environment in a Singapore context |
32 | Kim et al. (2017) | Neural Network (NN) based prediction of Coronary Heart Disease risk using feature correlation analysis (NN-FCA) | Quantitative, 4,146 subjects | Asia | Depends on accurate and large data set | The model was better than the Framingham risk score (FRS) in terms of coronary heart disease risk prediction |
33 | Kitzmiller et al. (2019) | Neonatal intensive care unit clinician perceptions of a continuous predictive analytics technology | Qualitative, 22 clinicians | North America | Accuracy is in doubt | The combination of physical location and lack of integration into workflow or procedures for using data in care decision-making may have delayed clinicians from routinely paying attention to the data |
34 | Krittanawong et al. (2021) | Deep neural network to predict in-hospital mortality in patients with Spontaneous Coronary Artery Dissection (SCAD) | Quantitative, 375 SCAD patients | North America | Relies on large volume, accurate, and regular patient data | The deep neural network model was associated with higher predictive accuracy and discriminative power than logistic regression or ML models for identification of patients with ACS due to SCAD prone to early mortality |
35 | Lee (2015) | Emergency department decision support system that couples machine learning, simulation, and optimisation to address improvement goals | Mixed | North America | Inadequate regulations | General improvement in patient care at the emergency department |
36 | Liberati et al. (2017) | Barriers and facilitators to the uptake of an evidence-based Computerised Decision Support Systems (CDSS) | Qualitative, 30 participants (healthcare workers) | Europe | Undermines professional autonomy and exposes practitioners to potential medico-legal issues | Attitudes of healthcare workers towards scientific evidence and guidelines, quality of inter-disciplinary relationships, and organisational ethos of transparency and accountability need to be considered when exploring facility readiness to implement AI tools |
37 | Li et al. (2021) | Machine learning-aided risk stratification system to simplify the procedure of diagnosing Coronary Artery Disease | Quantitative, 5,819 patients | Asia | Its efficacy depends on data availability and accuracy | The model could be useful for risk stratification in the prediction of coronary artery disease |
38 | Liu et al. (2021) | Machine learning models for predicting mortality in Coronary Artery Disease (CAD) patients with Atrial Fibrillation (AF) | Quantitative | Asia | Faulty data can result in errors | Combining the performance of all aspects of the models, the regularisation logistic regression model was recommended to be used in clinical practice |
39 | Love et al. (2018) | Using AI-based Computer-Assisted Diagnosis (CADx) in training healthcare workers | Quantitative, 32 palpable breast lumps examined by 3 non-radiologists | North America | 1. Could return negative diagnosis 2. High cost of care 3. Patients are sceptical about final decisions | A portable ultrasound system with CADx software can be successfully used by first-level healthcare workers to triage palpable breast lumps |
40 | MacPherson et al. (2021) | Costs and yield from systematic HIV-TB screening, including computer-aided digital chest X-ray | Quantitative, 1,462 residents | Africa | High cost of healthcare | Computer-aided digital chest X-ray with universal HIV screening significantly increased the timeliness and completeness of HIV and TB diagnosis |
41 | McBride et al. (2019) | Knowledge and attitudes of operating theatre staff towards robotic-assisted surgery programme | Quantitative, 164 respondents (clinicians) | Australia | 1. Sceptical about its efficacy in patient care 2. Increased cost of care | Clinicians embraced the application of the robotic-assisted surgery programme in the theatre |
42 | McCoy (2017) | Machine learning-based sepsis prediction algorithm to identify patients with sepsis earlier | Quantitative, 1,328 respondents | North America | Disruptiveness (alert fatigue) | The machine learning-based sepsis prediction algorithm improved patient outcomes |
43 | Mehta et al. (2021) | Assess knowledge, perceptions, and preferences about AI use in medical education | Quantitative survey, 321 medical students | North America | 1. Create new challenges to healthcare equity 2. Create new ethical and social challenges 3. Limited in the provision of empathetic, psychiatric, personal, and counselling care | Optimistic about AI’s capabilities to carry out a variety of healthcare functions, including clinical and administrative tasks; sceptical about AI’s utility in personal counselling and empathetic care |
44 | Morgenstern et al. (2021) | Determine the impacts of artificial intelligence (AI) on public health practice | Qualitative (inter-continental interviews), 15 experts in public health and AI | North America and Asia | 1. Inadequate experts in AI use 2. Poor healthcare data quality for AI training and learning 3. Introduce bias 4. Escalate healthcare inequity 5. Poor AI regulation | Experts are cautiously optimistic about AI’s potential to improve diagnosis and disease surveillance. However, substantial perceived barriers, such as inadequate regulation, exist |
45 | Motwani et al. (2017) | Traditional prognostic risk assessment in patients undergoing non-invasive imaging | Quantitative, 10,030 patients | North America | Efficacy depends on accurate, large, and regular data | Machine learning combining clinical and coronary computed tomographic angiography data was found to predict 5-year all-cause mortality significantly better than existing models |
46 | Naushad et al. (2018) | Coronary artery disease risk and percentage stenosis prediction models using ensemble machine learning algorithms, multifactor dimensionality reduction and recursive partitioning | Quantitative, 648 subjects | Asia | Needs accurate data to function well | The model exhibited higher predictability both in terms of disease prediction and stenosis prediction |
47 | Nydert et al. (2017) | Clinical Decision Support System (CDSS) among paediatricians | Qualitative, 17 clinicians | Europe | 1. Risk of overreliance on system 2. Cannot function independently | Generally, the system is considered very useful to patient drug management |
48 | O’Leary et al. (2014) | Support systems in healthcare and the concept of decision support for clinical pathways | Mixed, 19 clinicians | Europe | 1. Lack of autonomy by clinicians 2. Complication of the care process | The success of these systems depends on factors beyond the systems themselves |
49 | Omar et al. (2017) | Paediatricians’ acceptance, perception and use of Electronic Prescribing Decision Support System (EPDSS) using extended Technology Acceptance Model (TAM2) | Qualitative | North America | 1. Not user friendly 2. Does not cancel unneeded medications | Although paediatricians are positive about the usefulness of EPDSS, there are problems with acceptance due to usability issues of the system |
50 | Orlenko et al. (2020) | Tree-based Pipeline Optimisation Tool (TPOT) to predict angiographic diagnoses of Coronary Artery Disease (CAD) | Quantitative | Europe | Efficacy depends on accurate and large data set | A phenotypic profile that distinguishes non-obstructive coronary artery disease patients from non-coronary artery disease patients is associated with higher precision |
51 | Panicker and Sabu (2020) | Factors influencing the adoption of Computer-aided Medical Diagnosis (CMD) systems for TB | Qualitative, 18 healthcare workers | Asia | Prone to medical errors | Human, technological, and organisational characteristics influence the adoption of CMD systems for TB |
52 | Petersson et al. (2022) | Explore perceived challenges about AI use in healthcare | Qualitative, 26 healthcare leaders | Europe | 1. Liability and legal issues 2. Setting standards and complying with quality requirements 3. Cost of operating AI 4. Acceptance of AI by professionals | Healthcare leaders highlighted several implementation challenges in relation to AI within and beyond the healthcare system in general and their organisations in particular |
53 | Petitgand et al. (2020) | AI-based Decision Support System (DSS) in the emergency department | Qualitative, 20 Clinicians and AI developers | North America | System is prone to errors | The study points to the importance of considering interconnections between technical, human and organisational factors to better grasp the unique challenges raised by AI systems in healthcare |
54 | Pieszko (2019) | Risk assessment tool based on easily obtained features, including haematological indices and inflammation markers | Quantitative, 5,053 patients | Europe | Efficacy depends on large and accurate patient data | The machine-learning model can provide long-term predictions of accuracy comparable or superior to well-validated risk scores |
55 | Ploug et al. (2021) | Examine preferences for the performance and explainability of AI decision making in health care | Quantitative survey, 1,027 respondents (general public) | Europe | 1. Unintentional harm 2. Potential for bias and discrimination 3. Consent issues | Physicians must take ultimate responsibility for diagnostics and treatment planning, AI decision support should be explainable, and AI systems must be tested for discrimination |
56 | Polero (2020) | Random forest and elastic net algorithms to improve acute coronary syndrome risk prediction tools | Quantitative, 20,078 patients’ data | South America | Performance depends on large and accurate data | Random forest significantly outperformed existing models and can perform at par with previously developed scoring metrics |
57 | Prakash and Das (2021) | Factors influencing the uptake and use of intelligent conversational agents in mental healthcare | Qualitative | Asia | Inadequate legal regimes to protect patients | AI tools have proven efficacious in improving the health outcomes of patients. However, there are inadequate legal regimes to guide usage |
58 | Pumplun et al. (2021) | Factors that influence the adoption of machine learning systems for medical diagnosis in clinics | Qualitative, 22 healthcare workers | Europe | Unclear regulations to protect patients | Many clinics still face major problems in the application of machine learning systems for medical diagnostics |
59 | Richardson et al. (2021) | Examining patient views of diverse applications of AI in healthcare | Qualitative (FGDs), 87 patients | North America | 1. Breach of privacy 2. Discrimination and bias 3. Hacking and manipulation 4. Trust issues 5. Data integrity 6. Unknown harm 7. Breach of choice and autonomy 8. Lack of clear opportunity to challenge AI decisions 9. Cost of care 10. Inconsistent with insurance coverage 11. Over-dependence on AI leading to loss of skills and competencies | Addressing patient concerns relating to AI applications in healthcare is essential for effective clinical implementation |
60 | Romero-Brufau (2020) | Reduce unplanned hospital readmissions through the use of artificial intelligence-based clinical decision support | Quantitative, 2,460 respondents | North America | Heavily dependent on quality and regular data | Six months following a successful application of intervention, readmissions rates decreased by 25% |
61 | Sarwar et al. (2019) | Examine perspectives on AI implementation in clinical practice | Quantitative survey, 487 pathologists (inter-continental) | North America and Europe | 1. New medico-legal issues 2. Fear of errors 3. Inadequate AI skills and knowledge 4. Erodes skills and competencies of clinicians | Most respondents envision eventual rollout of AI tools to complement and not replace physicians in healthcare |
62 | Scheetz et al. (2021) | Investigate the diagnostic performance, feasibility, and end-user experiences of AI assisted diabetic retinopathy | Mixed, 236 patients, 8 HCWs | Australia | 1. Informed consent issues 2. Unintended errors | AI in healthcare well-accepted by patients and clinicians |
63 | Schuh (2018) | Creation and modification of Arden-Syntax-based Clinical Decision Support Systems (CDSSs) | Quantitative | Australia | 1. Inadequate legal regime 2. Poor data interoperability | Despite its high utility in patient care, inconsistent electronic data, lack of social acceptance among healthcare personnel, and weak legislation remain issues |
64 | Sendak (2020) | Integration of a deep learning sepsis detection and management platform, Sepsis Watch, into routine clinical care | Quantitative | North America | 1. High cost of care 2. Inadequate regulations | Although there is no playbook for integrating deep learning into clinical care, lessons from the Sepsis Watch integration can inform efforts to develop machine learning technologies at other healthcare delivery systems |
65 | Sherazi et al. (2020) | Propose a machine learning-based 1-year mortality prediction model for acute coronary syndrome patients after discharge | Quantitative, 10,813 subjects | Asia | May produce medical error due to faulty data | The model would be beneficial for prediction and early detection of major adverse cardiovascular events in acute coronary syndrome patients |
66 | Sujan et al. (2022) | Explore views about AI in healthcare | Qualitative, 26 patients, hospital staff, technology developers, and regulators | Europe | 1. Unforeseen errors 2. Inadequate ethical and legal regime 3. Inability to respond to socio-cultural diversity 4. Inadequate situation awareness between AI and humans 5. Inadequate humanity | Safety and assurance of healthcare AI need to be based on a systems approach that expands the current technology-centric focus |
67 | Terry et al. (2022) | Explore views about the use of AI tools in healthcare | Qualitative, 14 primary healthcare and digital health stakeholders | North America | 1. Current data system inadequate to support AI everywhere 2. Inadequate manpower and resources to roll out AI 3. Loss of control in decision making 4. Unpredictable errors and mistakes 5. Ethical, legal, and social implications | Use of AI in primary healthcare may have a positive impact, but many factors need to be considered regarding its implementation |
68 | Tscholl et al. (2018) | Perceptions about patient monitoring technology (Visual Patient) for transforming numerical and waveform data into a virtual model | Mixed, 128 interviews (anaesthesiologists) and 38 online surveys | Europe | Not very effective in patient monitoring | The new avatar-based technology improves the turnaround time in patient care |
69 | Ugarte-Gil et al. (2020) | A socio-technical system to implement a computer-aided diagnosis | Mixed, 12 clinicians | South America | 1. May interfere with workflow 2. May be insensitive to local context 3. Incompatible with existing technology | Several infrastructure and technological challenges impaired the effective implementation of the mHealth tool, irrespective of its diagnostic accuracy |
70 | van der Heijden (2018) | Incorporation of the IDx diabetic retinopathy device (IDx-DR 2.0) into the clinical workflow to detect retinopathy in persons with type 2 diabetes | Quantitative, 1,415 respondents | Europe | Few errors recorded | High predictive validity recorded for the IDx-DR 2.0 device |
71 | van der Zander et al. (2022) | Investigate perspectives about AI use in healthcare | Quantitative survey, 492 (377 patients and 80 clinicians) | Europe | 1. Potential loss of personal contact 2. Unintended errors | Both patients and physicians hold positive perspectives towards AI in healthcare |
72 | Visram et al. (2023) | Investigate attitudes towards AI and its future applications in medicine and healthcare | Qualitative (FGD), 21 young persons | Europe | 1. Lack of humanity 2. Inadequate regulations 3. Lack of trust 4. Lack of empathy | Children and young people should be included in developing AI. This requires an enabling environment for human-centred AI involving children and young people |
73 | Wang et al. (2021) | AI-powered clinical decision support systems in clinical decision-making scenarios | Qualitative, 22 physicians | Asia | 1. May conflict with diagnostic decisions by clinicians 2. May interfere with workflow 3. Subject to diagnostic errors 4. May be insensitive to local context 5. There is suspicion of AI tools 6. Incompatible with existing technology | Despite difficulties, there is a strong and positive expectation about the role of AI-based clinical decision support systems in the future |
74 | Wittal et al. (2022) | Survey public perception and knowledge of AI use in healthcare, therapy, and diagnosis | Quantitative survey, 2,001 respondents (general public) | Europe | 1. Privacy breaches 2. Data integrity issues 3. Lack of a clear legal framework | Need to improve education and perception of medical AI applications by increasing awareness, highlighting the potentials, and ensuring compliance with guidelines and regulations to handle data protection |
75 | Xu (2020) | Medical-grade wireless monitoring system based on wearable and artificial intelligence technology | Quantitative | Asia | 1. Increased cost of care 2. Doubt about its suitability for all patient categories | The AI tool can provide reliable physiological monitoring for patients in general wards and has the potential to generate more personalised pathophysiological information related to disease diagnosis and treatment |
76 | Zhai et al. (2021) | Develop and test a model for investigating the factors that drive radiation oncologists’ acceptance of AI contouring technology | Quantitative, 307 respondents | Asia | Medical errors | Clinicians had highly positive perceptions of AI-assisted technology for radiation contouring |
77 | Zhang et al. (2020) | Provide optimal detection models for suspected Coronary Artery Disease detection | Quantitative, 62 patients | Asia | Depends on large and accurate data | Multi-modal feature fusion and hybrid feature selection can obtain more effective information for coronary artery disease detection and provide a reference for physicians to diagnose coronary artery disease patients |
78 | Zheng et al. (2021) | Clinicians’ and other professional technicians’ familiarity with, attitudes towards, and concerns about AI in ophthalmology | Quantitative, 562 respondents (291 clinicians and 271 other technicians) | Asia | Ethical concerns | AI tools are relevant in ophthalmology and would help improve patient health outcomes |
79 | Zhou et al. (2019) | Examine concordance between the treatment recommendation proposed by Watson for Oncology and actual clinical decisions by oncologists in a cancer centre | Quantitative, 362 cancer patients | Asia | Insensitive to local context | There is concordance between AI tools and human clinician decisions |
80 | Zhou et al. (2020) | Develop and internally validate a laboratory-based model with data from a Chinese cohort of inpatients with suspected stable chest pain | Quantitative, 8,963 patients | Asia | Needs very accurate and large volume of data | The present model provided a large net benefit compared with the Coronary Artery Disease Consortium 1/2 score (CAD1/2), the Duke clinical score, and the Forrester score |