USE OF LLMs IN MEDICINE: Opportunities, Challenges, and Ethical Considerations
Paliukhovich N.F., student; Moyseyonok N.S., senior lecturer
Belarusian State University, Minsk, Belarus
Abstract: This article examines the role of artificial intelligence (AI) and large language models (LLMs) in solving the problems of global healthcare and changing its face in the future. It highlights persistent challenges in accessing healthcare, particularly in developing countries, and inequalities in healthcare delivery. It discusses the potential of AI to improve diagnostics and support clinical decision making, and it addresses ethical considerations, risks of diagnostic errors, data privacy issues, and inequalities that may arise as AI is introduced into healthcare.
Key words: healthcare, artificial intelligence (AI), large language models (LLMs), virtual health assistants, cognitive behavioral therapy (CBT), fitness technology, data privacy.
The global healthcare system faces numerous challenges that hinder the provision of quality health services to people around the world. According to the WHO, in 2021 about 4.5 billion people, i.e. more than half of the world's population, had difficulty obtaining basic health services [1]. The situation is most difficult in developing countries. According to a report prepared by the Indian Ministry of Health in 2016, doctors named inadequate infrastructure, insufficient funding and the irrational deployment of personnel as the main problems of the healthcare system in rural areas [2]. Now, 8 years later, the problems in the region remain the same: in rural India the shortfall of specialist doctors reaches 80% [3]. However, it would be wrong to say that the problem concerns only developing countries. According to Russian Deputy Minister of Health Tatyana Semenova, the shortage of medical workers in the country is almost 100 thousand people [4]. Countries such as the United States have not escaped difficulties in healthcare either. According to the 2021 National Healthcare Quality and Disparities Report, 40% of American Indians and Alaska Natives and 43% of Black patients receive worse-quality health care than white patients. The same report found that low-income families have poorer access to health insurance, to health services and to timely care than high-income families [5]. An equally important problem in medicine is medical errors. The WHO reports that approximately 1 in 10 patients is harmed while receiving care, and more than 3 million deaths occur annually due to unsafe care; in low- and middle-income countries, up to 4 out of 100 people die from unsafe care [1].

Thus, we see clear opportunities for introducing LLMs into the medical process. Moreover, there are already cases where consulting a neural network helped a patient receive the care they needed. For example, in the US a chatbot helped to diagnose a rare disease in a boy whom 17 doctors had been unable to diagnose over the course of 3 years [7]. AI can also help in veterinary medicine: Twitter user @peakcooper described how a veterinarian was unable to diagnose his dog's illness, but when the owner gave the AI the blood test results and a description of the symptoms, the model suggested a diagnosis that another veterinarian later confirmed. Of course, it would be wrong to claim that existing models diagnose with absolute accuracy. Researchers from Mass General Brigham in Boston, Massachusetts, studied the ability of an AI chatbot to correctly diagnose patients and manage treatment in primary care and emergency care settings. The chatbot's accuracy turned out to be about 72%, which is "roughly equivalent to the level of a medical school graduate." It is also worth noting that this is not a specialized model, and it can be assumed that AI trained specifically to diagnose diseases will make mistakes much less often [6].

Of course, ChatGPT and OpenAI are not the only medical assistants out there. The startups Buoy Health and Ada Health offer users the ability to self-assess their symptoms and receive preliminary diagnoses and recommendations. Users enter their symptoms through an AI-powered chatbot, which asks additional questions for clarification; based on the answers, Buoy and Ada analyze the data and provide the user with possible diagnoses, as well as recommendations for further action, including when to see a doctor.
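The intake-and-clarify flow just described can be illustrated with a minimal sketch. The code below is purely hypothetical and is not based on Buoy Health's or Ada Health's actual implementations; it assumes the OpenAI Python SDK (version 1.x) with an API key in the OPENAI_API_KEY environment variable, and the model name is a placeholder that any chat-completion backend could replace.

```python
# Hypothetical sketch of a symptom-triage chat loop (NOT Buoy's or Ada's real code).
# Assumes the OpenAI Python SDK (>= 1.0) and an API key in OPENAI_API_KEY;
# any other chat-completion backend could be substituted.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are a symptom-triage assistant. Ask one clarifying question at a time. "
    "After at most three questions, list possible (non-definitive) causes and say "
    "clearly whether the user should see a doctor. You do not give diagnoses."
)

def triage_dialogue(max_questions: int = 3) -> None:
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    complaint = input("Describe your symptoms: ")
    messages.append({"role": "user", "content": complaint})

    for _ in range(max_questions + 1):
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=messages,
        ).choices[0].message.content
        print(f"\nAssistant: {reply}\n")
        messages.append({"role": "assistant", "content": reply})

        # Crude stopping rule for the sketch: stop once the assistant has given
        # its recommendation instead of another clarifying question.
        if "?" not in reply:
            break
        messages.append({"role": "user", "content": input("You: ")})

if __name__ == "__main__":
    triage_dialogue()
```

A production system would additionally need red-flag detection (e.g. chest pain, difficulty breathing) with immediate escalation to emergency guidance, content safety filtering, and privacy-compliant logging; the sketch shows only the conversational skeleton.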
Such solutions allow for a more systematic approach to providing medical care. Another interesting example is the Woebot Health project, a virtual mental health assistant that uses cognitive behavioral therapy (CBT) to support users in managing their emotional state. CBT is a psychotherapeutic approach used to treat mental disorders, including anxiety, depression, panic attacks, phobias, obsessive-compulsive disorder, and body dysmorphic disorder. Woebot offers exercises, advice, and support in real time, which essentially makes it an accessible psychologist. Even at its current stage of development, AI is making real progress, and it is likely to become an extremely useful tool for patients.

We should also expect doctors to use electronic assistants. Integrating LLMs into medical practice can significantly improve the quality of services provided by novice specialists, and it would most likely be useful for more experienced doctors as well. Machine algorithms can take into account a large amount of data, including the rare medical histories of other patients, which allows them to make correct suggestions that may initially seem unlikely. The introduction of such systems in developing countries could lead to a significant improvement in the extremely low quality of medical services there. This view is supported by the study's author, Dr. Marc Succi, who emphasizes that "LLMs, in general, have the potential to be an augmenting tool for the practice of medicine and support clinical decision-making with impressive accuracy" [6].

The development of technology has also brought new smart devices. Almost everyone is familiar with fitness trackers. Today, advanced models have gone far beyond simply measuring heart rate and offer a wide range of functionality: they track blood oxygen levels, sleep quality, and stress levels, and can even record an electrocardiogram. With the advent of AI, it has become easier to analyze the data collected by such trackers and to turn it into recommendations. Machine algorithms can offer individual workout and physical-activity plans based on user data, such as fitness level and goals (weight loss, muscle gain, or simply maintaining health), and when negative trends in the body are detected, the user can be advised to visit a doctor. Perhaps in the near future we will be able to speak of AI-based personal assistants that help not only to identify and treat diseases but also to strengthen the immune system and the body as a whole.

The described prospects seem promising, but further steps to integrate LLMs into healthcare will certainly face challenges. Some of these problems are probably not yet known, but others can be described right now. The most obvious one is incorrect diagnoses and the prescription of useless or harmful treatments. As mentioned earlier, algorithms currently show a relatively high level of accuracy in making diagnoses, and there is reason to hope that this accuracy will improve over time. However, it is difficult to give guarantees, since it is not always clear how exactly AI arrives at its decisions. It also remains debatable who should be held responsible for AI medical errors, and whether anyone should be at all, since using a chatbot instead of a full appointment with a doctor is the patient's own decision.
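To make the accuracy figures discussed above more concrete, the snippet below shows one way such a number could be computed. This is a hypothetical illustration, not the protocol of the study cited above: the toy cases and the stand-in model are invented for the example, and real evaluations rely on expert adjudication rather than string matching.

```python
# Hypothetical illustration of measuring top-1 diagnostic accuracy on labelled
# clinical vignettes. The cases and the stand-in "model" are invented for the
# example; this is NOT the methodology of the study cited in the text.
from typing import Callable

# Toy labelled cases: (free-text vignette, reference diagnosis).
CASES = [
    ("6-year-old with fever, barking cough and inspiratory stridor", "croup"),
    ("58-year-old smoker with crushing chest pain radiating to the left arm",
     "myocardial infarction"),
    ("24-year-old with polyuria, polydipsia, weight loss and glucose 19 mmol/L",
     "type 1 diabetes"),
]

def top1_accuracy(predict_diagnosis: Callable[[str], str]) -> float:
    """Fraction of cases where the model's single best guess matches the label."""
    hits = 0
    for vignette, reference in CASES:
        prediction = predict_diagnosis(vignette).lower()
        if reference.lower() in prediction:  # lenient string match, for the sketch only
            hits += 1
    return hits / len(CASES)

def always_croup(vignette: str) -> str:
    # Trivial stand-in model; a real evaluation would call an actual LLM here.
    return "croup"

if __name__ == "__main__":
    print(f"top-1 accuracy: {top1_accuracy(always_croup):.0%}")  # 33% on the toy set
```

Even at this toy scale it is clear that a headline percentage depends on how "correct" is defined (exact match, synonym, or inclusion in a differential), which is worth keeping in mind when comparing figures such as the 72% quoted above.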
A related problem is the possible overreliance on medical assistants and, as a result, delayed treatment with undesirable consequences. Another issue is the inequality of opportunities mentioned earlier. In more developed countries, the adoption of medical AI will be more active, helped by trust in technology and a higher level of citizen responsibility. Interacting with chatbots is also expected to be more difficult for older people: many of them have not had the opportunity to use digital technologies actively, which can cause mistrust of the system and reluctance to learn how to use it. As a result, changes to the familiar system of medical care will cause them additional stress.

The topic of privacy and personal data protection deserves special mention. According to the Cyberhaven resource, users already often entrust ChatGPT with confidential work information [8], and in the case of a medical assistant the volume of private data processed by the model will be many times larger. A language model for medical purposes will, by its very design, have access to confidential and very detailed information about thousands of patients. The questions arise of who could gain unauthorized access to it and how this data can be anonymized and protected. In addition, confidential data could end up being used for training and tuning the model. The privacy issue may lead many people to refuse to use AI for fear of their data being leaked, and, left unresolved, it may also create conflicts with data protection laws.

In conclusion, the integration of artificial intelligence into medical practice can improve the quality of medical services and make them more accessible. LLMs can apply methods that are not available with the traditional approach. However, despite encouraging results, such as successful diagnosis of diseases and the use of AI in mental health, there are many serious challenges. Any project in this area must reliably solve the issue of patient data privacy. It is also important to consider the needs of different population groups, especially the elderly, and to create intuitive interfaces and training programs. Successful integration of AI into medicine requires a systematic approach that makes it possible to use the benefits and minimize the risks. Ensuring data security, improving the accuracy of algorithms, and building trust in new technologies give hope that AI will become part of the healthcare system.

References
2. A Review of Existing Regulatory Mechanisms to Address the Shortage of Doctors in Rural, Remote and Underserved Areas: A Study Across Five States in India [Electronic resource] / National Health Mission, Ministry of Health and Family Welfare, Government of India. – New Delhi: National Health Systems Resource Centre, 2016. – Mode of access: https://nhsrcindia.org/sites/default/files/2021-06/Regulatory%20study%20report.pdf. – Date of access: 14.11.2024.
3. ThePrint [Electronic resource]. – Mode of access: https://theprint.in/health/rural-india-has-an-80-shortfall-of-specialist-doctors-mp-gujarat-tamil-nadu-worst-off/2259874/. – Date of access: 14.11.2024.
4. TASS [Electronic resource]. – Mode of access: https://tass.ru/obschestvo/20358475. – Date of access: 14.11.2024.
5. 2021 National Healthcare Quality and Disparities Report [Electronic resource] / Agency for Healthcare Research and Quality (US), 2021. – Mode of access: https://www.ncbi.nlm.nih.gov/books/NBK578533/#ch1.s5
6. Daily Mail [Electronic resource]. – Mode of access: https://www.dailymail.co.uk/health/article-12433231/ChatGPT-performs-resident-doctor-diagnosing-patients-prescribing-right-medication-primary-care-ER-departments.html
7. Daily Mail [Electronic resource]. – Mode of access: https://www.dailymail.co.uk/health/article-12509111/ChatGPT-diagnosis-rare-condition.html
8. Cyberhaven [Electronic resource]. – Mode of access: https://www.cyberhaven.com/blog/4-2-of-workers-have-pasted-company-data-into-chatgpt