Will Humans Trust an AI Physician? More Easily Than You Think!
Authors: Amina Khalpey, PhD, Parker Wilson, BS, Zain Khalpey, MD, PhD, FACS
We at Khalpey AI Lab are working to advance the use of AI technologies in healthcare. We believe strongly that artificial intelligence is an important tool for the healthcare industry and will bring it to new heights in the future. Starting with simple managerial tasks and working toward fully functional AI-based physicians is the path we envision for AI in healthcare. We see AI as an exciting and useful development that can improve how we care for our patients and drive down healthcare costs.
The field of artificial intelligence (AI) has made significant strides over the last few decades, and it is increasingly being utilized in various domains, including healthcare. AI-based healthcare solutions are becoming widespread, and AI is now being used to aid in disease diagnosis, drug development, and even personalized medical care. One area where AI can potentially revolutionize healthcare is through the use of AI-based physicians. An AI-based physician is a medical decision-making system that analyzes data from patients and uses machine learning algorithms and evidence-based decision tools to make diagnoses and recommend treatments. The question is, will humans accept an AI physician? In this post, we will explore the potential benefits of AI-based physicians, the concerns people may have, and how these concerns can be addressed.
Benefits of AI-based Physicians
One notable advantage of AI-based physicians is their ability to analyze vast amounts of data quickly and accurately. Medical data, such as electronic health records, lab results, and imaging studies, are often too much for a busy human physician to analyze in a reasonable amount of time. This can lead to errors or delays in diagnosis and treatment. However, an AI-based physician can analyze these data sets much faster and with greater accuracy, potentially leading to more prompt diagnosis and treatment plans.
Another advantage of AI-based physicians is that they can continuously learn and improve. An AI-based physician can analyze the results of past diagnoses and treatments, and adjust its algorithms to improve its accuracy and effectiveness over time. This continuous learning can lead to better patient outcomes and potentially reduce the risk of medical errors. Given ample data and learning opportunities, an AI-based physician should rarely repeat the same mistake.
AI-based physicians can also potentially reduce the cost of healthcare. Human physicians require years of training and experience, and their services can be expensive. In contrast, AI-based physicians can be developed once and used repeatedly, potentially reducing the cost of healthcare services. Additionally, AI-based physicians could potentially reduce the cost of medical malpractice insurance, as they may be less likely to make errors compared to human physicians.
Reviewing the Challenges of AI-based Physicians
Despite these possible benefits of AI-based physicians, some people may have concerns about using them. One significant concern is the loss of human connection. Patients may feel uncomfortable discussing their medical issues with a machine, may feel that they are not receiving personalized care, or may worry that their privacy is being violated on a potentially unsecured server. Additionally, patients may be concerned that an AI-based physician will not be able to provide emotional support and a subtle understanding of the human condition, which is often critical in healthcare settings.
Another concern is the potential for bias in the algorithms used by AI-based physicians. The data used to train an AI-based physician may be biased, leading to incorrect diagnoses and potentially harmful treatment recommendations. The algorithms themselves may also be biased, potentially leading to unfair or unequal treatment for certain patient groups. Guarding against this would require continuous monitoring of and feedback to the algorithms to ensure progressive learning and improvement over time, which would mean ongoing human oversight of the AI-based physician.
As with all AI technologies, there is concern that AI-based physicians will replace human physicians. While AI-based physicians have the potential to provide faster, more accurate, and cost-effective medical diagnoses and treatment recommendations, they cannot provide the same level of emotional support as human physicians. Additionally, there will often be cases where human judgment and intuition are necessary, such as when a patient's medical history is unclear or when dealing with complex medical conditions. For these reasons, we believe concern about the replacement of human physicians with AI-based ones should be low.
Finally, there is a reasonable concern regarding the potential for privacy violations when using AI-based physicians. Medical data is highly sensitive, and patients may be concerned that their data will be misused or shared without their consent. The AI-based physician would need to operate on an extremely secure server with active surveillance measures to prevent protected health information (PHI) from being shared. This is a large hurdle to the development and utilization of an AI-based physician. If only limited data sets were available, the tool's utility would be diminished, and it could only be used for simple diagnoses and advice.
Addressing AI-Physician Concerns
While concerns about AI-based physicians are understandable, many of these concerns can be addressed. One way to address the loss of the human connection is to use AI-based physicians in conjunction with human physicians. An AI-based physician can analyze medical data and make initial diagnoses and treatment recommendations, while a human physician can provide emotional support and personalized care. This hybrid approach can potentially provide the benefits of both AI-based and human physicians.
To address concerns about bias, it is essential to ensure that the data used to train AI-based physicians is diverse and representative of the patient population. Additionally, algorithms should be tested to ensure that they are not biased and do not unfairly disadvantage certain patient groups. To curb possible biases, AI-based physicians should be transparent, explainable, and reproducible. Patients and healthcare providers may be hesitant to use AI-based physicians if they cannot understand how the system arrived at a particular diagnosis or treatment recommendation. This is particularly important when dealing with complex medical conditions where a patient's life may be at stake. Ensuring that AI-based physicians are transparent and explainable can increase trust and confidence while decreasing bias in these systems, leading to greater acceptance.
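One simple way to test an algorithm for group bias is to compare its sensitivity (true-positive rate) across patient groups: a large gap suggests the model under-serves one population. The minimal Python sketch below illustrates the idea on invented toy data; the function names, labels, and group attribute are all hypothetical, not part of any particular clinical system.

```python
# Minimal sketch of a fairness audit: per-group sensitivity on toy data.
# A real audit would use held-out clinical data and several metrics.

def sensitivity(labels, preds):
    """True-positive rate: of the truly positive cases, how many were caught."""
    caught = [p for l, p in zip(labels, preds) if l == 1]
    return sum(caught) / len(caught) if caught else float("nan")

def tpr_by_group(labels, preds, groups):
    """Return {group: sensitivity} so disparities are easy to spot."""
    result = {}
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        result[g] = sensitivity([labels[i] for i in idx],
                                [preds[i] for i in idx])
    return result

# Toy data: the model catches every case in group A but misses half in group B.
labels = [1, 1, 0, 1, 1, 0, 1, 0]
preds  = [1, 1, 0, 1, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(tpr_by_group(labels, preds, groups))  # sensitivity 1.0 for A, 0.5 for B
```

A disparity like this would be a signal to retrain on more representative data or to recalibrate the model for the disadvantaged group before deployment.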
Finally, patients should have control over their medical data and should be informed of how their data is being used and who has access to it. Privacy is a major obstacle to the broad use of AI-based physicians. The servers on which these tools are hosted will need to be extremely secure and private so that patients can safely share sensitive information and retrieve results that they alone can access. These tools could easily be misused by bad actors, including but not limited to identity thieves and insurance companies. It is essential to prioritize security when creating an AI-based physician.
To start, however, it is reasonable to use algorithms and tools that are simple and singular in focus. For example, an AI-based physician could be created to estimate a patient's risk of atherosclerotic cardiovascular disease (ASCVD) using the established ASCVD risk calculator. This is a simple, evidence-based tool that does not require AI to analyze large, private data sets. Moving into the future, though, AI-based physicians would have to be integrated into electronic medical records (EMR) to retrieve large patient data sets if they were to have broader applicability.
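To give a sense of how narrow such a single-purpose tool can be, the sketch below mimics the structure of a Cox-style 10-year risk model (risk = 1 − S0^exp(score − mean score)), the same general form used by ASCVD risk calculators. Every coefficient, the baseline survival, and the mean score are placeholders we invented for illustration, not the published Pooled Cohort Equations values, so the numbers it produces are not clinically meaningful.

```python
import math

# Illustrative single-purpose risk tool: Cox-style 10-year risk estimate.
# All constants below are PLACEHOLDERS for illustration, not the published
# Pooled Cohort Equations coefficients.
COEFFS = {
    "ln_age": 12.34,         # placeholder coefficient on ln(age)
    "ln_total_chol": 11.85,  # placeholder, ln(total cholesterol)
    "ln_hdl": -7.99,         # placeholder, ln(HDL); higher HDL lowers risk
    "ln_sbp": 1.80,          # placeholder, ln(systolic blood pressure)
    "smoker": 0.55,          # placeholder, current smoker (0 or 1)
    "diabetic": 0.65,        # placeholder, diabetes (0 or 1)
}
BASELINE_SURVIVAL = 0.95     # placeholder 10-year baseline survival S0
MEAN_SCORE = 90.0            # placeholder population mean of the linear score

def ten_year_risk(age, total_chol, hdl, sbp, smoker, diabetic):
    """Return an illustrative 10-year risk in (0, 1)."""
    score = (COEFFS["ln_age"] * math.log(age)
             + COEFFS["ln_total_chol"] * math.log(total_chol)
             + COEFFS["ln_hdl"] * math.log(hdl)
             + COEFFS["ln_sbp"] * math.log(sbp)
             + COEFFS["smoker"] * smoker
             + COEFFS["diabetic"] * diabetic)
    return 1.0 - BASELINE_SURVIVAL ** math.exp(score - MEAN_SCORE)
```

A tool of this shape needs only six inputs the patient can supply directly, which is exactly why it avoids the privacy concerns of mining a full medical record: no EMR integration, no large private data set.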
The Future of Medicine is AI
AI-based physicians have the potential to revolutionize healthcare by providing fast, accurate, and cost-effective medical diagnoses and treatment recommendations. While there are concerns about the use of AI-based physicians, these concerns can be addressed through a hybrid approach that combines the strengths of both AI and human physicians. Ensuring diversity in data sets and testing algorithms for bias can also help to address concerns about fairness and equality. Concern about the replacement of human physicians should remain low, as medical diagnosis is often nuanced and relies on human judgment and intuition. Finally, privacy is paramount when creating an AI-based physician and forms a large obstacle to wide adoption. Patients should be informed about and have control over their medical data to address concerns about privacy violations. Additionally, AI-based physicians will need to be securely hosted and used to prevent bad actors from misusing PHI.
The acceptance of AI-based physicians will depend on how well these concerns are addressed, as well as on the effectiveness of these systems in delivering secure, high-quality healthcare. One significant challenge will be to convince patients and healthcare providers that AI-based physicians can provide the same level of care as human physicians. While AI-based physicians may not be able to provide the same level of emotional support or intuition as human physicians, they could provide faster and more accurate diagnoses, potentially leading to better patient outcomes.
We believe that education and awareness will be essential to increasing acceptance of AI-based physicians. Patients and providers may be more willing to use these tools if they understand how they work and the potential benefits they offer. Additionally, research and development efforts should continue to focus on improving the security, accuracy and effectiveness of AI-based physicians.
Federal regulation could help to curb biases and privacy violations while using AI-based physicians. Laws, restrictions and requirements can provide assurance to patients and healthcare providers that AI-based physicians will not jeopardize their sensitive health information. In conclusion, while there are potential concerns about the use of AI-based physicians, these concerns can be addressed through careful consideration and development of these systems, leading to potentially significant improvements in healthcare.