Since making its debut, Generative AI has made significant strides, promising to transform both work and leisure. Tools like ChatGPT, Copilot, and Google’s Gemini have fueled expectations of a radically different future. Yet, a year and a half on, many aspects of life remain unchanged. Despite this, early adopters are finding innovative ways to incorporate AI into their personal and professional routines. As AI enters the healthcare sector, important questions arise: Is it welcome? Is it safe? Will it simplify our lives or complicate them?
A recent webinar organized by BMJ Future Health and featuring Dr Keith Grimes, GP & Digital Health Consultant, provided a deep dive into the world of Large Language Models (LLMs), offering insights into their construction, training, strengths, weaknesses, and applications in healthcare. The aim was to equip clinicians to use Generative AI safely and to write effective prompts that get the most out of AI assistance.
LLMs are deep learning models trained on vast text datasets to predict the next token in a sequence from patterns learned during training. Broadly, the larger the model and the more data it is trained on, the more capable it becomes. For example, GPT-4, the latest model from OpenAI, is reported to have been trained on around 13 trillion tokens, representing a substantial portion of the public internet.
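To make the prediction mechanism concrete, here is a minimal sketch of next-token prediction. It uses the small, open GPT-2 model via the Hugging Face transformers library, since the weights of commercial models like GPT-4 are not public; the same objective, scaled up, underlies the larger systems.

```python
# Next-token prediction in miniature: score every vocabulary token as a
# possible continuation and inspect the most probable candidates.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "The patient presented with chest pain and"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# The last position holds the model's probability distribution over the
# next token; the generated text is simply sampled from it, step by step.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx.item())!r}: {p:.3f}")
```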
Generative AI models can generate detailed content from brief prompts, condense large volumes of information into concise summaries, translate text between languages and levels of complexity, and simulate reasoning processes to assist with decision-making. However, significant limitations exist. These models can hallucinate, producing factually incorrect or nonsensical information. They can also reflect biases present in their training data, potentially leading to unfair or harmful outcomes. Furthermore, because they predict text rather than compute, they are unreliable at mathematical calculations, and they cannot access real-time information beyond their training data.
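The arithmetic limitation is worth guarding against explicitly: any number quoted in AI-drafted text can be re-derived deterministically rather than trusted. A minimal sketch, with invented figures:

```python
# LLMs predict plausible text rather than compute, so numbers they quote
# should be re-checked independently. All values here are illustrative.
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

model_quoted_bmi = 27.8                                      # from an AI draft
computed_bmi = round(bmi(weight_kg=82.0, height_m=1.74), 1)  # 27.1

if abs(model_quoted_bmi - computed_bmi) > 0.1:
    print(f"Mismatch: draft says {model_quoted_bmi}, "
          f"calculation gives {computed_bmi} - review before filing")
```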
In the healthcare sector, Generative AI can be a powerful tool, assisting with a wide range of tasks. It can help personalize medicine by creating tailored treatment plans, automate the drafting of referral letters, discharge summaries, and patient records, and offer insights drawn from extensive medical literature and guidelines. It should be used as a supportive tool rather than a standalone solution, with a human-in-the-loop approach in which AI assists clinicians but the final decisions and validation rest with human professionals.
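As a concrete illustration of human-in-the-loop working, the sketch below has the model produce only a first draft of a referral letter, which is released solely on a clinician's approval. It assumes the OpenAI Python SDK and an invented draft-then-review workflow (it is not the webinar's own implementation), and any real deployment would need to satisfy local information-governance rules before going anywhere near patient data.

```python
# A minimal human-in-the-loop sketch. Assumptions: the OpenAI Python SDK
# (openai>=1.0) with an API key in the environment; all details invented.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_referral_letter(case_summary: str) -> str:
    """Ask the model for a first draft only; nothing is sent unreviewed."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": ("You are drafting a referral letter for a GP. "
                         "Flag anything you are unsure of with [CHECK].")},
            {"role": "user", "content": case_summary},
        ],
    )
    return response.choices[0].message.content

def clinician_review(draft: str) -> str | None:
    """The human decision point: the clinician approves or rejects."""
    print(draft)
    approved = input("Approve this letter? [y/N] ").strip().lower() == "y"
    return draft if approved else None

letter = clinician_review(draft_referral_letter(
    "58-year-old man, exertional chest pain, strong family history of IHD"
))
```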
Effective use of Generative AI hinges on well-crafted prompts. Techniques for optimizing AI interactions include providing clear roles, objectives, and context in prompts to guide AI responses. Encouraging the AI to think through tasks methodically can enhance accuracy, and utilizing platform-specific features, like ChatGPT’s custom GPTs, can tailor AI capabilities to specific needs.
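Putting those techniques together, a structured prompt might look like the following sketch; the scenario and wording are invented for illustration, not a validated clinical prompt.

```python
# A structured prompt using the role / objective / context pattern, plus an
# explicit instruction to work step by step. Placeholder content throughout.
prompt_template = """\
Role: You are an experienced UK GP reviewing a hospital discharge summary.

Objective: Produce (1) a plain-English summary for the patient and
(2) a list of follow-up actions for the practice.

Context:
- The patient is 72, lives alone, and has mild cognitive impairment.
- Any medication changes must be stated explicitly.

Instructions: Work through the summary section by section before writing
your answer, and flag anything ambiguous rather than guessing.

Discharge summary:
{discharge_summary}
"""

print(prompt_template.format(discharge_summary="<paste summary here>"))
```

On platforms that support it, a template like this can be saved as the standing instructions of a custom GPT, so the role and context do not need to be restated for every task.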
The ethical and regulatory challenges of using AI in healthcare are significant. Current LLMs are not classified as medical devices, which means they have not been assessed for clinical safety or effectiveness and should therefore be used cautiously. Ensuring the data used for training is representative and free of sensitive information is also crucial to preventing bias and data breaches.