
ChatGPT limited in answering complex oral health questions, study finds

Patients increasingly use online resources to access oral health information, including tobacco-related topics. (Image: Rawpixel.com/Adobe Stock)

Mon. 11. August 2025


AMMAN, Jordan: As artificial intelligence continues to expand its role in healthcare, large language models such as ChatGPT are increasingly being used to seek health-related information. In dentistry, where patient education is essential and misinformation online is on the rise, it is particularly important to evaluate the reliability of such tools. Given the well-established impact of tobacco use on oral health—including its association with periodontal disease, tooth loss, oral cancer and impaired healing—a new study has assessed how effectively ChatGPT responds to public questions on this important topic.

The researchers—from the University of Jordan and the Kuwait Ministry of Health in Sulaibikhat—used online tools to generate a pool of commonly asked questions. These questions, frequently posed to chatbots and search engines, covered areas such as periodontal conditions, teeth and general oral health, soft-tissue health, oral surgery, and oral hygiene- and breath-related concerns, and were narrowed to a final set of 119. Responses were generated by ChatGPT 3.5 and assessed by the researchers on the parameters of usefulness, readability, quality, reliability and actionability.

Most responses were judged to be either very useful (36.1%) or useful (42.0%) and of moderate (41.2%) to good quality (37.0%). However, just 23.5% of responses scored highly in terms of actionability and 35.3% were found to be only moderately easy to read.  

When responses were compared across topics, the tool’s scores varied. Responses to questions on specialised subjects—such as the effects of tobacco use on oral soft tissue and on oral surgery—were rated less useful.

Commenting on previous studies that have found that ChatGPT excels in specialised subjects and is even capable of passing dental examinations, the researchers highlighted that the format of the prompting questions affects the quality and reliability of responses. “This discrepancy may be due to differences in question formats, as our general inquiries differ from the structured, academic nature of test-based questions,” they explained. 

The technology was found to address general questions competently but to perform more poorly on detailed questions and specialised topics. “For example, ChatGPT failed to consistently generate accurate and comprehensive responses to the majority of radiation oncology patient-centred questions, particularly across less common cancers and with ‘negative control’ questions that included incorrect assumptions,” the authors wrote. Highlighting that the usefulness of health-related information for the public rests on its readability and understandability, the researchers stressed the need for artificial intelligence tools to balance quality and simplicity in their responses effectively.

In summary, the authors concluded that ChatGPT is a valuable tool for providing general information on the effects of tobacco use on oral health, but that it faces challenges in readability, consistent actionability and the quality of responses on specialised topics. “While ChatGPT can effectively supplement healthcare education, it should not replace professional dental advice,” they wrote.

The study, titled “Assessing ChatGPT’s suitability in responding to the public’s inquiries on the effects of smoking on oral health”, was published online on 19 July 2025 in BMC Oral Health. 
