15/11/2023

Artificial Intelligence has the potential to revolutionize the healthcare industry, but how patients react to content produced by AI tools remains an open question. A recent study examines the impact of AI-powered chatbots in the treatment of urolithiasis.

AI has emerged as a transformative force in various fields, and its application in healthcare is no exception.

The use of Large Language Models in healthcare

Large Language Models (LLMs) represent a significant milestone in AI development. They empower machines to comprehend and produce human-like language. Among these models, the generative pre-trained transformer (GPT), particularly the GPT-3.5 model, has gained attention for its capacity to generate intricate responses across various languages. Recent advances in GPT-4 have expanded its capabilities, allowing images to be uploaded as input.
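For a concrete sense of how such a model is queried in practice, here is a minimal sketch using the OpenAI Python SDK. The prompt wording is purely illustrative, not taken from the study, and the example assumes an API key is available in the environment.

```python
# A minimal sketch, not the study's actual setup: querying a GPT-3.5
# model for patient-friendly guidance. Assumes the OpenAI Python SDK
# (openai>=1.0) and an API key in the OPENAI_API_KEY environment
# variable; the prompt wording is illustrative.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": "You are a patient-education assistant. "
                    "Answer in plain, non-technical language."},
        {"role": "user",
         "content": "What lifestyle changes help prevent urinary "
                    "stones from coming back?"},
    ],
)

print(response.choices[0].message.content)
```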

These AI models have the potential to transform the medical field, but understanding how patients perceive and engage with the content generated by these tools is equally crucial to effectively integrate them into healthcare facilities.

The study on patients undergoing treatment for urolithiasis

In a study recently published in the Journal of Digital Health, Seong Hwan Kim and colleagues examined patients undergoing treatment for urolithiasis, a condition characterized by the formation of stones in the urinary tract. The authors investigated how an AI-powered chatbot, ChatGPT version 3.5, affected patients’ perceptions before and after they received information about lifestyle changes aimed at preventing the recurrence of urolithiasis. The goal of the study was to illuminate the evolving relationship between patients and the use of artificial intelligence in healthcare.

Patients involved in the study completed self-administered questionnaires: an initial questionnaire before receiving the explanation of lifestyle modifications to prevent the recurrence of urolithiasis, and follow-up questionnaires after receiving the explanation generated by ChatGPT.

Patients were included if they had been diagnosed with urolithiasis by computed tomography, had undergone ureterorenoscopy, and were between 18 and 80 years old. Patients who were unable to understand the ChatGPT-generated explanations or to complete the questionnaires were excluded.

The introduction of AI-based chatbots in healthcare can enhance patient engagement and education. However, the findings of the cited study revealed negative reactions from patients, especially among those with lower levels of education. This suggests that these patients may perceive AI-generated content more negatively, potentially because of limited familiarity with digital technologies.

The perception of AI in healthcare: how is it influenced?

As with any innovative technology, the perception of AI in healthcare is shaped by many factors, including the nature of the technology itself and the characteristics of the individual patient. One of these factors is the patient’s level of education, which the WHO recognizes as a determinant of health. While AI has the potential to improve healthcare and clinical outcomes, it must be used in a way that meets patients’ needs according to their educational background. This is particularly important in the medical field, where the accuracy of the information patients rely on is pivotal for their well-being.
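As a purely hypothetical illustration of what tailoring content to educational background could look like, a chatbot’s instructions might be adjusted to a target reading level before a question is submitted. The mapping and function names below are illustrative assumptions, not part of the cited study.

```python
# Hypothetical sketch: matching the instructions sent to a chatbot to
# the patient's educational background. The reading-level mapping and
# names are illustrative assumptions, not taken from the cited study.
READING_LEVELS = {
    "basic": "Use short sentences, everyday words, and no medical jargon.",
    "intermediate": "Use plain language and briefly define any medical term.",
    "advanced": "Standard medical terminology is acceptable.",
}

def build_system_prompt(education_level: str) -> str:
    """Return chatbot instructions matched to the patient's background."""
    style = READING_LEVELS.get(education_level, READING_LEVELS["basic"])
    return f"You are a patient-education assistant. {style}"

# The resulting string would be sent as the system message of the request.
print(build_system_prompt("basic"))
```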

Urolithiasis can profoundly affect patients’ physical and mental health. Lifestyle changes play a crucial role in preventing the formation of stones, alongside adherence to specific dietary guidelines. AI-powered chatbots, like ChatGPT, can help patients understand and follow these recommendations, summarizing complex information and offering guidance in a straightforward manner.

The reliability of AI-generated content

The reliability of AI-generated content continues to be a significant topic of debate. Chatbots like ChatGPT depend on data collected from the Internet, which may contain inaccuracies and errors. Ensuring the reliability of AI-generated medical information is essential for reducing risks for patients.

For this reason, future developments of AI chatbots should prioritize the verification of medical content to increase its reliability in clinical settings. Allowing physicians themselves to configure and review the AI’s responses should be considered, to ensure the accuracy of the information provided to users.
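What such physician-in-the-loop verification could look like in practice is sketched below. The workflow and names are hypothetical, offered only as an illustration, not a description of any existing system.

```python
# Hypothetical sketch of physician-in-the-loop verification: AI-generated
# answers wait in a review queue and reach patients only after a physician
# approves them. All names here are illustrative.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DraftAnswer:
    question: str
    ai_text: str
    approved: bool = False
    reviewer: Optional[str] = None

@dataclass
class ReviewQueue:
    pending: list = field(default_factory=list)

    def submit(self, draft: DraftAnswer) -> None:
        """Hold an AI-generated draft until a physician reviews it."""
        self.pending.append(draft)

    def approve(self, draft: DraftAnswer, physician: str) -> None:
        """Mark a draft as verified and release it from the queue."""
        draft.approved = True
        draft.reviewer = physician
        self.pending.remove(draft)

queue = ReviewQueue()
draft = DraftAnswer(
    question="How much water should I drink each day?",
    ai_text="Drink enough fluid to produce about 2 to 2.5 litres of urine per day.",
)
queue.submit(draft)
queue.approve(draft, physician="Dr. Example")
assert draft.approved  # only approved answers would be shown to the patient
```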

Despite the challenges, chatbots are being used to simplify low-complexity tasks and improve the flow of information in healthcare. They aid physicians by summarizing clinical information, managing health records, and offering evidence-based recommendations. However, generative artificial intelligence cannot replace human physicians. To ensure responsible use in the medical field, problems surrounding data accuracy and the generation of false information need to be addressed as soon as possible.

Conclusions

In conclusion, the study highlighted how patients perceive AI in healthcare and how this perception is changing, specifically in the management of urolithiasis. As previously mentioned, patients with lower levels of education expressed a negative evaluation after receiving an explanation generated by ChatGPT. Even though chatbots have the potential to improve patient knowledge and engagement, several issues must still be resolved before they can be integrated effectively.

To improve patient perceptions and promote the proper adoption of AI in healthcare, it is necessary to develop user-friendly interfaces, provide clear and accurate information, and prioritize authoritative verification of medical content. Chatbots have their limits, but their continued development and improvement promise to enhance healthcare delivery and clinical outcomes in the future.