Artificial Intelligence in Digital Health: Issues and Dimensions of Ethical Concerns

Artificial intelligence (AI) is transforming the healthcare system at a breakneck pace by improving digital healthcare services, research, and performance, fueled by the combination of big data and powerful machine learning algorithms. As a result, AI applications are being employed in digital healthcare domains, some of which were previously regarded as the exclusive preserve of human expertise. However, despite AI's benefits in digital healthcare services, a number of issues and ethical concerns need to be addressed. Using a mapping review methodology, a taxonomy of the issues and ethical concerns surrounding the employment of AI in healthcare is presented and discussed. Policy recommendations and future research directions are also presented.


Introduction
Artificial Intelligence (AI) is bringing fundamental change to digital healthcare services, thanks to the growing availability and accessibility of data and the rapid advancement of advanced analytics [1,2]. The significant growth of digital data, the advancement of computational power bolstered by innovation in hardware, including graphics processing units, and machine learning (ML) techniques, widely applied using deep learning (DL), are all leaving an indelible mark on the healthcare domain [3]. This has attracted attention, and a considerable body of research, on the effective usage of ML over large volumes of health data [4][5][6].
Moreover, increased AI use is sought to lower the significantly high rate of human error in healthcare services, which is likely to result in injury or even death [7]. For instance, scientists are researching the effective use of DL to imitate the neural networks of the human brain, and such systems are being tested in hospitals by businesses like Google to see if these machines can help with decision-making by anticipating what will happen to a patient [8]. However, several ethical concerns arise in the healthcare domain when AI systems perform the same tasks as humans [9][10][11][12]. For instance, consider a scenario in which a robot commits a calculation error and prescribes the incorrect dose of medicine, resulting in catastrophic harm or death. Additional questions arise in the same vein: what if, on the other hand, AI machines result in new kinds of medical errors? And who will be found liable if they occur? [14,15]. In such a setting, some scholars argue that policies and rules for ethical AI in health care are inevitable given these rising challenges [13].
In the meantime, policies and ethical guidelines for healthcare services that use AI and its implementations lag behind the pace of AI advancements [16]. As a result, existing AI-based technologies and applications must be examined and discussed to address ethical concerns. Thus, in this paper, the existing issues and ethical concerns in the employment of AI in digital health are presented and discussed. The paper is structured as follows: Section 2 presents an overview of AI employment in digital health. Section 3 presents the methodology used in this study. Section 4 discusses the existing issues in employing AI in digital healthcare services. Section 5 discusses and presents the ethical dimensions of AI concerns in digital healthcare, along with the taxonomy. Finally, Section 6 closes with conclusions, policy recommendations, and future research directions.

AI Employment in Digital Health
In digital healthcare (DH) services, AI can be employed under virtual and physical categories. The virtual portion includes electronic and system perspectives, ranging from Electronic Health Record (EHR) systems, Natural Language Processing (NLP), and expert systems to neural network-based treatment decision assistance [17]. The physical section covers themes such as robotic surgery assistants, intelligent prosthetics for disabled persons, and senior care [18].
Robots are becoming more collaborative with people and easier to accommodate, since they can be taught by guiding them through a task, and they are also becoming smarter as more AI functionality is integrated into their 'brains' [18]. The same advances in intelligence that we have seen in other AI fields will soon be relevant to physical healthcare robots. For example, surgical robots allow surgeons to see better, make more precise and less invasive incisions, repair wounds, and so on [19]. Medical experts can now care for a higher number of patients by using AI. AI tools can assist them in making better diagnostic judgments, improving treatment outcomes, and reducing medical errors. AI could also help with HR difficulties such as the recruitment and selection of potential healthcare workers [20].
The most crucial questions, meanwhile, remain open: "Are we willing to give life-and-death choices to AI?" and "Can computers definitively determine whether or not the treatment given to a patient is adequate?" Addressing these concerns is part of ongoing research and may be challenging due to the multiple hurdles and difficulties that AI and robotics may entail. Nevertheless, one thing is certain: AI and robotics will continue to play a significant role in DH services.
In the meantime, there has been significant growth in the amount of data available for assessing healthcare activity and biological data in recent years. With this rising amount of data, DL is being applied to identify disease patterns, such as cancer in its early stages, thanks to ongoing advances in processing power [21]. In addition, consumer wearables and other medical equipment, blended with AI, are being used to identify and detect possible episodes of early heart disease, allowing doctors and other carers to supervise better and to detect potentially serious incidents at an earlier, more treatable phase. Thus, pattern recognition is being used to identify people at risk of developing an illness, or seeing one worsen, due to lifestyle, environmental, genetic, or other variables. EMR databases store information about previous hospital visits, diagnoses and treatments, lab results, medical images, and clinical narratives. These datasets can be used to create prediction models to assist physicians with diagnoses and treatment decision-making. As AI techniques improve, it will be feasible to extract a wide range of data, including disease-related impacts and connections between past and future medical events. Thus, even though AI applications for EMRs are presently restricted, the potential for employing huge datasets to discover new patterns and forecast health consequences is immense [23].
Furthermore, one of the more modern uses of AI in healthcare is drug research and development. There is the possibility of drastically reducing both the time to market for new pharmaceuticals and their prices by directing the current breakthroughs in AI toward expediting the drug discovery and drug repurposing processes.

Methodology
Studies on AI in "healthcare services" published between November 2021 and November 2021 were identified and screened. Titles and abstracts were checked, and full-text papers were reviewed if they were appropriate. Quantitative analysis and a major focus on AI and healthcare services, especially on issues and ethical considerations, were the inclusion requirements. Non-English and non-AI studies were both ruled out.
Significant studies were additionally snowballed via citation searching in Google Scholar, and the reference lists included in publications were inspected. Next, the identified references were evaluated by a group of three reviewers. To ensure uniformity, the first 100 references were examined by all three reviewers. Any questions were resolved through discussion with the other two reviewers. After the title and abstract screening, the references that could meet the inclusion criteria were given a second look, and data was retrieved for use in the review. Citations that did not match the inclusion criteria were removed from the study. For mapping purposes, references were divided into sets according to AI issues in healthcare and AI ethical concerns in digital healthcare services. Data extraction was completed from each abstract. If an abstract was not accessible, minimal details from the title were collected for the mapping review, on the understanding that the full text would be retrieved if the reference was included in the intended systematic review.

AI Issues in Digital Health
The advancement of artificial intelligence in healthcare is fraught with issues and challenges. Errors in AI systems, for example, put patients at risk of harm. Similarly, using a patient's data for AI research puts the patient's privacy in danger. In this section, the issues of employing AI in digital health are discussed.

Training Data
To correctly train neural networks and guarantee an effective neural network algorithm in AI, huge volumes of data are required (citation). Robust and accurate systems cannot be designed with too little data [24,25]. With too little data, overfitting can occur: the model is trained to fit the training data too closely, fails to fit the evaluation data, and therefore does not generalize to new data. Furthermore, several governments may find it challenging to use AI since they lack the necessary data to train neural networks. Deep learning also relies significantly on labelled data to ensure that an algorithm's output is of high quality. As a result, these datasets necessitate a large number of experts and data analysts. For some rare health disorders, open-source data is available; data on more infectious ailments, on the other hand, is limited [26].
Present AI systems are heavily reliant on their training data; as a result, these algorithms' accuracy is limited by the information in the datasets on which they are trained, which means they cannot escape biases and errors in the training data. Furthermore, due to differences in patient demography, physician preferences, equipment and resources, and health policies, medical information obtained in clinical practice often varies among entities and contexts [27]. This brings us to the limitation of data usage according to the pattern and context of its collection. Can healthcare professionals in developing countries, for instance, depend on AI decision-making capabilities trained on data from developed countries, whose populations are distinct from those of developing countries? Moreover, much data is unstructured and unorganized with regard to its final form and gathering method, with missing data regularly occurring [28]. As a result, the vast majority of clinical data are inadequate for AI algorithms to use effectively. In the meantime, producing and annotating this much medical data takes a lot of time and effort. This has led to few available databases, in which hidden biases may not be apparent. As a result, researchers collecting large amounts of medical data to construct AI systems may rely on whichever data is available, even if it is subject to numerous selection biases.
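The data-scarcity and overfitting risk described above can be sketched with a small synthetic example (all data is simulated here; NumPy is assumed, and the "biomarker" framing is purely illustrative): an over-parameterized model fits scarce, noisy training data almost perfectly, yet generalizes poorly compared with a simpler model matched to the data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D "biomarker vs. outcome" data: a linear trend plus noise.
x_train = np.linspace(0.0, 1.0, 10)
y_train = 2.0 * x_train + rng.normal(0.0, 0.2, 10)   # only 10 noisy samples
x_test = np.linspace(0.0, 1.0, 50)
y_test = 2.0 * x_test                                # noiseless ground truth

def mse(coeffs, x, y):
    """Mean squared error of a polynomial model on (x, y)."""
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

# A degree-9 polynomial through 10 points can memorize the noise exactly...
overfit = np.polyfit(x_train, y_train, 9)
# ...while a degree-1 model captures the true underlying trend.
simple = np.polyfit(x_train, y_train, 1)

print(f"train MSE: overfit={mse(overfit, x_train, y_train):.5f}, "
      f"simple={mse(simple, x_train, y_train):.5f}")
print(f"test  MSE: overfit={mse(overfit, x_test, y_test):.5f}, "
      f"simple={mse(simple, x_test, y_test):.5f}")
```

The over-parameterized fit drives its training error to essentially zero while doing markedly worse on unseen data; this is the failure mode that too-small medical datasets invite.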

Blackbox and Explainable AI (XAI)
The "black box" problem hinders the ability to see how the algorithms within AI-based healthcare systems work. A black box makes decisions based on many connections, making it impossible for the human mind to understand how, and on what basis, a decision was made [29]. As a result, it calls into question the integrity of the data and the system's modus operandi. This means that neither doctors nor patients can understand how the AI system arrived at its choice [30]. Therefore, transparency is required in AI-based healthcare systems to perform proper and effective system evaluations and audits. Thus, an AI-based healthcare system must be auditable, whereby transparency should include correct information on the technology's premises, constraints, operating protocols, data attributes (including data collecting, processing, and labelling methods), and algorithmic modelling. In addition, experts have emphasized that explainability is required when an AI offers health recommendations, particularly to uncover biases in the case of "black-box" algorithms [31]. This has led to rising interest in the AI sub-discipline called "Explainable AI" (XAI). XAI extends to how AI machines can know the context and environment in which they work and construct appropriate explainable models that allow AI to define important considerations across time (Figure 1).
Explainable AI (XAI) is a suite of techniques and frameworks that can assist users in comprehending and understanding machine learning predictions. XAI can improve the usability of AI-based digital healthcare services by helping end-users trust that the AI makes smart decisions, which is essential in digital healthcare services. The goal of XAI in this manner is to convey what has been undertaken with the data, to reveal the knowledge on which actions are based, and to show how the AI came to a decision [32]. This argument is currently under discussion in the academic arena, with some scholars believing that what counts is that the AI is correct, at least in the context of diagnosis, rather than how it reaches its conclusion [33]. Positive outcomes of randomized clinical trials could be used to establish the safety and usefulness of "black box" health AI applications, comparable to how pharmaceuticals are handled.
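As one concrete illustration of the kind of technique the XAI literature describes, the sketch below implements permutation importance by hand against a hypothetical black-box risk model (the model, feature names, and data are all invented for illustration; NumPy is assumed). The idea: shuffle one input feature at a time and measure how much the model's accuracy drops; features the black box truly relies on produce large drops.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical black box: internally it only consults feature 0 (say, a
# blood-pressure reading); features 1 and 2 are ignored. The auditor does
# not know this and probes the model purely through its predictions.
def black_box(X):
    return (X[:, 0] > 0.5).astype(int)

X = rng.random((200, 3))   # simulated patient records, 3 features
y = black_box(X)           # labels the model predicts perfectly, for illustration

def permutation_importance(model, X, y, n_repeats=10):
    """Mean accuracy drop when each feature column is shuffled."""
    base = float(np.mean(model(X) == y))
    importances = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])
            drops.append(base - float(np.mean(model(Xp) == y)))
        importances.append(float(np.mean(drops)))
    return importances

imp = permutation_importance(black_box, X, y)
print(imp)  # feature 0 shows a large accuracy drop; features 1 and 2 show none
```

This kind of model-agnostic probe is how an auditor can learn what an opaque system depends on, without any access to its internals.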

AI Malfunctions
Even with directly guided robotic surgery, robotic faults during surgery still occur [34]. For example, consider robotically assisted surgical devices (RASDs), which allow surgeons to manipulate tiny cutting instruments rather than traditional scalpels. If a surgeon's hand slips with a scalpel and a key tendon is sliced, our instinct is that the surgeon is to blame. But what if the surgeon is employing a RASD that is touted as having a unique "tendon avoidance subroutine," comparable to the warnings that cars currently emit when sensors detect a potential collision? Can the wounded patient sue the RASD vendor if the tendon sensors fail and the warning does not sound before an incorrect cut is made? Or is the doctor who relied on it solely liable? Surgical errors made by unsupervised robotic surgical devices will certainly be one of the largest legal difficulties in the future, even though there are still significant challenges in direct-control robotic surgery [35]. Of course, a human doctor might have averted the surgical blunders produced by autonomous robotic surgical devices, but these systems may outgrow the need for people in the future. This raises the following questions: should we keep improving these technologies until the surgical error rate caused by robots reaches zero, and should we continue to allow patient damage due to human error until the system is perfected? Should we promote autonomous robotic surgery after obtaining satisfactory outcomes at the expense of a few patients? To answer these problems, further education in the field of AI technology is required. Concerns have also been raised about how machine errors can be spotted in the first place. Proponents of AI will cite the specific example of AI in airline autopilot mode, which does not jeopardize pilot training but which, in our view, has been accountable for autopilot malfunctions that result in plane crashes.

AI vs Professional Skills
There is a growing concern in academia that increasing reliance on algorithms will impair people's ability to think for themselves in the long run. AI is thus bound to gradually deprive our brains of mental effort and thinking as we become more accustomed to using it in everyday chores. Moreover, according to researchers, high dependence on automation can erode professional abilities. As healthcare workers use AI extensively, it may impede the development of doctors' abilities and clinical procedures [36]. There is thus a potential deskilling of healthcare professionals due to increased dependence on AI. One of the ways to tackle this is to employ a human-AI combinatory approach. For example, when it comes to cancer diagnosis, clinicians must be both sensitive and specific to avoid over-flagging of questionable tissue. While humans are not very sensitive, algorithms are. Therefore, combining the two sets of talents might have a huge positive impact on healthcare.

Standardization of AI Algorithms
Several researchers throughout the world are developing AI algorithms for DH. Simultaneously, several governments and corporations invest heavily in AI research on DH. One may wonder whether AI research has already resulted in standardized algorithms for digital healthcare services [46]. For AI in DH, standardization work is essential, helpful, and instructive. It represents both a vital lever for driving industrial innovation and the pinnacle of the competitive landscape. While AI-related products for DH are becoming more widely available in China, issues with insufficient levels of standardization are also emerging.
Currently, since each study's performance is presented using various approaches on diverse communities with distinct sample distributions and features, objective assessment of AI algorithms across studies is difficult.
Therefore, algorithms must be compared on the same independent, generalizable test set, using the same performance standards, to make fair comparisons within a particular domain such as DH [47]. Clinicians will struggle to determine which algorithm is most likely to perform well for their patients if this is not done. Independent local test sets curated by each healthcare provider could be used to test the efficiency of the various accessible algorithms on a sample group of their population. These separate test sets must be built from an unenriched, representative group, with the data intentionally withheld from algorithm training. Before actual testing, an additional local training dataset might be provided to allow fine-tuning of the algorithms. The rising availability of huge, accessible datasets will simplify comparison for academics, allowing researchers to compare effectiveness cohesively.
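The comparison protocol described above can be sketched minimally (the data and the two "vendor" algorithms are synthetic, invented here for illustration): both algorithms are scored on the same held-out test set with the same metrics, making the resulting numbers directly comparable.

```python
import numpy as np

rng = np.random.default_rng(2)

# A single, shared, held-out test set (simulated): one feature, binary label.
x_test = rng.random(500)
y_test = (x_test > 0.6).astype(int)

# Two hypothetical vendor algorithms exposed behind the same interface.
def algorithm_a(x):
    return (x > 0.6).astype(int)

def algorithm_b(x):
    return (x > 0.8).astype(int)   # systematically under-diagnoses

def sensitivity(y_true, y_pred):
    tp = np.sum((y_true == 1) & (y_pred == 1))
    return float(tp / max(np.sum(y_true == 1), 1))

def specificity(y_true, y_pred):
    tn = np.sum((y_true == 0) & (y_pred == 0))
    return float(tn / max(np.sum(y_true == 0), 1))

# Same test set, same metrics -> a fair, like-for-like comparison.
results = {}
for name, algo in [("A", algorithm_a), ("B", algorithm_b)]:
    pred = algo(x_test)
    results[name] = (sensitivity(y_test, pred), specificity(y_test, pred))
    print(name, results[name])
```

Because the test set and metrics are fixed, the gap between the two algorithms here reflects the algorithms themselves, not differences in evaluation cohorts.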

Algorithmic Bias
While AI applications can eliminate human prejudice and mistakes, the data used to train them might reflect and reinforce biases. Concerns have been voiced about the possibility of AI causing prejudice in ways that are concealed or that do not correlate with legally protected criteria like gender, race, disability, and age. In addition, the advantages of AI in healthcare may not be dispersed evenly [37]. Where data is limited or difficult to obtain or render electronically, AI may perform less well. People with rare medical illnesses, and groups disadvantaged in clinical trials and scientific data, such as Black, Asian, and minority ethnic communities, may be affected. Biased AI may, for example, lead to incorrect diagnoses and make medicines ineffectual for particular population groups, jeopardizing their safety in the health sector, where phenotype- and sometimes genotype-related data is involved [38]. For example, consider an AI-based clinical decision support (CDS) system that assists physicians in determining the optimum treatments for people with skin cancer, where the algorithm was primarily trained on Asian patients.
As a result, subgroups for which the dataset was underinclusive, including African Americans, may receive less accurate or even erroneous suggestions from the AI software. High information availability and efforts to better collect data from minority communities, and to clearly describe which populations an algorithm is or is not fit for, may help resolve some of these biases. Nevertheless, there is still the issue of several algorithms being complex and opaque. As the public's trust grows, so will the provision of information from a wider range of sources. For example, we already know that some diseases exhibit differently depending on the patient's ethnic origin. A quick illustration can be seen in an AI program developed to detect malignant moles. In its early phases, the AI will have been trained on a database primarily made up of photos of white skin, making it less likely to detect malignant patterns on darker skin.
Before artificial intelligence, medical datasets and trials had a longstanding history of prejudice and underrepresentation of women and persons of diverse races and ethnicities. One way in which results can be skewed is if the dataset utilized for machine learning does not include enough people of diverse sexes, races and ethnicities, or socioeconomic backgrounds. COVID-19's unequal impact on some racial and ethnic groups mirrors longstanding racial disparities in scientific research and access, and serves as a harsh reminder of the need to reduce prejudice in developing health-research technology [39].
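A routine subgroup audit, of the kind these concerns call for, can be sketched as follows (all data is simulated; the biomarker cutoffs are invented to mimic a condition that presents differently across groups): a model whose decision threshold was derived from one group is evaluated separately on each group, exposing the performance gap that an aggregate metric would hide.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated cohort: the condition presents above biomarker 0.5 in group 0
# but above 0.3 in group 1 (invented numbers, purely illustrative).
n = 1000
group = rng.integers(0, 2, n)
biomarker = rng.random(n)
true_label = np.where(group == 0, biomarker > 0.5, biomarker > 0.3).astype(int)

# A model whose cutoff was effectively learned from group-0 data alone.
model_pred = (biomarker > 0.5).astype(int)

# Subgroup audit: report accuracy separately for each demographic group.
accuracy = {}
for g in (0, 1):
    mask = group == g
    accuracy[g] = float(np.mean(model_pred[mask] == true_label[mask]))
    print(f"group {g}: accuracy {accuracy[g]:.2f}")
```

Overall accuracy here would look respectable, while the audit shows the model systematically under-diagnosing group 1, which is exactly the failure mode described above.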

Privacy and Security
Data powers AI. Machines would be unable to learn how to 'think' without it. This is why the privacy of patients' medical data is so important, and it has become a global corporate and government focus.
Nevertheless, due to the sheer sensitive nature of patients' medical information, the health industry is heavily vulnerable to cyber attacks. Much of the data used in AI applications in healthcare is considered confidential and private, and is regulated by law. Other types of data, such as social media interactions and web search history, that are not directly related to health status could nonetheless be used to infer information about the user's health and that of others. Artificial intelligence could be used to identify cyber-attacks and safeguard healthcare information systems. Nevertheless, AI systems could themselves be hacked to obtain access to sensitive information, or inundated with phoney or biased information in ways that are difficult to discern [40].
One of the most pressing issues is integrating AI machine learning into clinical settings with informed consent and balancing patient privacy with AI effectiveness and safety [41]. This also raises the question of when a practitioner must inform a patient that AI is being utilized. Moreover, there is a lack of public awareness of how patient data is used, and both doctors and patients want to learn more about it. This poses trust issues: can we trust an application to diagnose us better than, or on par with, a doctor? We must first understand why people are terrified of AI before we can create trust. Rather than dismissing them as dullards, we should consider their concerns and make them a part of the solution. Values change from one country to the next, as well as from one corporation to the next. While face recognition technology is widely used in China, individuals in the West are wary of such surveillance. The gathering, use, assessment, and sharing of patient data has sparked widespread concern about privacy rights, since a lack of privacy can harm an individual (for example, future discrimination based on one's health status) or cause a wrong (for example, affecting a person's dignity if sensitive health information is distributed or broadcast to others) [42].
Given that personal medical information is among the most sensitive data types, there are serious ethical questions about how access, management, and usage can alter over time as a self-improving AI system gets better. In creating regulations in this domain, a focus on patient consent would represent the essential ethical ideals.
Requirements for technologically assisted, repeated informed consent for new information uses, for instance, would serve to protect patients' privacy. Additionally, the right to withdraw data could be explicitly stated, complementing existing human contacts. The active research questions in this area include: What influence will the digital transformation and growing use of AI have on the patient-doctor relationship? How can we make use of AI in DH while keeping important values like safety, privacy, security, and trust in mind?

Liability and Full delegation of AI in Digital Health
Even if AI robots and algorithms are extremely sophisticated, mistakes can still happen. Humans, like the machines they train, are likely to make mistakes. The ongoing question is, who is liable for the AI's mistakes: the medical centre, the health practitioner, or the algorithm's manufacturer? [50]. If physicians cannot be held responsible, should the AI system be made responsible? Many academics disagree, claiming that AI lacks the human characteristics required to make moral judgements based on empathy and semantic comprehension. However, the fact that AI systems cannot currently be held responsible for their actions should not deter efforts to instil moral responsibility in them. These arguments need to be addressed.
Moreover, we may be unable to track how decisions are taken when AI algorithms are opaque, which they are in many circumstances. As a result, increased openness among participants must be enforced. Those that create or implement AI systems may face legal consequences, though the details of how that responsibility is regulated and enforced are still under debate. Another difficulty with liability is that AI systems are always advancing and developing, posing new challenges and unique circumstances, as in the case of AI systems capable of generating new AI algorithms. So, what, or who, is to be held responsible when an AI system establishes a fully independent system? Another decision to be made is whether accountability rules should incentivize practitioners to use AI to inform and verify their clinical judgment, or to diverge from their judgment if an algorithm reaches an unanticipated outcome. If healthcare practitioners are penalized for leaning on AI technologies that turn out to be erroneous, they may only utilize the technology to corroborate their judgment. While this may protect them from legal liability, it may inhibit AI from being used to its greatest potential: to augment rather than validate human judgment.

Digital Divide
The "digital divide," which refers to unequal access to, usage of, or impact of information and communication technology among various populations, is one obstacle argued to impede AI adoption. Even though the cost of digital technology is decreasing, access is still unequal. Researchers have argued that two-tiered health care could be one of the significant long-term repercussions for the healthcare delivery system. Is it possible that a two-tier diagnostic service will arise, with only the richest people having access to human-led interpretation of AI in DH? Or, conversely, are only the wealthy granted access to a potentially superior machine-led analysis of test findings or imaging?
Training AI takes a lot of time and energy, as well as a lot of computer resources.Machine learning models are typically only run by wealthy countries and universities with substantial computational power.This becomes a barrier to AI and frontier technology democratization.

Collective Medical Mind
The so-called "collective medical mind" dilemma addresses the transfer of medical authority from human doctors to algorithms. The danger here is that AI systems used as decision support tools will eventually become central hubs in medical decision-making. In this situation, it is unclear how existing medical ethics concepts (consequentialism, beneficence and non-maleficence, and patient respect) can still be anticipated to play the important role in the patient-doctor interaction that they do now, or may be expected to have in the future. Moreover, using AI-powered tools to mediate the doctor-patient connection can radically transform the doctor-patient interaction. AI may increase interpersonal distance between patients and their doctors, particularly as it permits distant care or communication via robotic assistants. The need to streamline patient care could motivate providers to adopt such tools, but the flip side is that the patient becomes more and more isolated, which could have detrimental consequences for health outcomes. AI-based household platforms are subject to the same concerns. In theory, these technologies might be immensely valuable for providing better care to older people with reduced mobility, for example. They can, nevertheless, exacerbate social isolation.

Harmonized Ethical Framework
There is little to no relevant multilateral guidance on using AI for health in compliance with ethical principles and provisions currently available. The majority of countries lack legislation or regulations governing the use of AI in health care, and those that do exist may not be sufficient or specialized enough for this purpose. To generate trust in these technologies and prevent the proliferation of inconsistent norms, ethics guidelines based on the common perspectives of the multiple agencies that create, utilize, or oversee such technologies are vital.
For the design and deployment of AI for global health, standardized ethics guidelines are required.

Conclusion
As AI becomes more integrated into work and personal life, it poses ethical risks, such as replacing people with robots. These concerns are more pressing in the healthcare field, where decisions can mean life or death. The spread of AI may result in the delivery of health services in unregulated settings and by uncontrolled practitioners, posing issues for government healthcare regulation. Proper regulatory supervision mechanisms must be devised to guarantee that the private sector is directly answerable to the individuals who stand to gain from AI goods and services, and that decision-making and operations are open.
A fundamental problem for future AI governance will be to ensure that AI is designed and implemented ethically, transparently, and compatibly with the public interest, while reducing risks and promoting innovation in digital healthcare. Because AI technology draws inferences through machine learning on the data collected, the decision-making process may ignore the unique circumstances of individual patients, raising ethical, moral, and legal concerns.
As a result, it is important to discuss the rules and conventions that AI technology should follow, such as the ethics, regulations, and personal beliefs that govern society's behaviour.
AI is predicted to enable deeper ties between healthcare providers and patients in the long term, for instance by compensating for AI's faults. As a result, the medical school curriculum should efficiently incorporate AI-related learning and technology. Furthermore, because medical AI is constrained by historical patient datasets, more precise AI systems should be created. This progress necessitates the active cooperation of medical professionals from the start of the AI development process.
To tackle this problem, it is vital to cultivate a highly trained professional team that can react quickly in the consumer-oriented healthcare industry by offering a variety of sophisticated technology-learning possibilities, such as leveraging AI to cooperate with medical personnel. Furthermore, to produce new employment, we believe that new curricula (for example, technology innovation and application, human-machine confluence, data analytics, human-machine exchange, cyber ethics and accountability, and so on) should be incorporated into medical school curricula.
The recommendations in this article are based on existing AI-based technology use, which may limit our comprehension of future technology's full potential.However, this study has suggested guidelines for effective usage and management of AI by reviewing the literature and real-world uses of AI systems in healthcare organizations.We anticipate that our research will spur additional rigorous theoretical and empirical research into the most effective use of AI systems to deliver the best possible treatment for patients and public health prevention.