The AI paradox: Augmenting and reducing the same problem

The world of artificial intelligence (AI) is evolving so rapidly that it’s hard to keep up, even when exploring new technology every week. It doesn’t seem like much time has passed since I wrote about “AI in terms my mother could understand” (2019) and my first impressions of “ChatGPT” (2023/01), yet some of those concepts are already getting old. Large language models (LLMs) have become everyday tools, and generative technology like ChatGPT is the shiny new arrival on the market. And it’s not just a fleeting trend; it’s one of the most significant advancements of the past decade. However, its rapid emergence has caused a stir, creating a genuine paradox.

The Evolution

Where AI once helped us predict a patient’s risk of developing a septic infection, LLMs now enable us to analyze text and treat it like any other information on a computer. For instance, where we once could only count the words in a text and search for the most repeated ones, today’s LLMs can turn that text into a series of semantic vectors and use math to discover the text’s topic, summarize it, or translate it into any language effortlessly. But we’re still talking about technology that existed a couple of years ago, so what changed to make everyone use AI nowadays?
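To make the idea of semantic vectors concrete, here is a minimal sketch. It assumes the open-source sentence-transformers library and one of its small pretrained models; neither is named above, they are simply one convenient way to get embeddings:

```python
# A minimal sketch of turning text into semantic vectors, assuming the
# sentence-transformers library (pip install sentence-transformers).
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose model

sentences = [
    "The patient shows early signs of a septic infection.",
    "Sepsis risk indicators are present in this patient.",
    "The invoice is due at the end of the month.",
]

# Each sentence becomes a vector; similar meanings land close together.
embeddings = model.encode(sentences)

print(util.cos_sim(embeddings[0], embeddings[1]))  # high similarity: same topic
print(util.cos_sim(embeddings[0], embeddings[2]))  # low similarity: unrelated
```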

ChatGPT

Two things: GPT-4 and ChatGPT. GPT stands for Generative Pre-trained Transformer, and its fourth generation can generate new text based on its training on millions of existing texts, a vast improvement over prior models. ChatGPT uses this technology to simulate human conversation. You say “Hello,” and ChatGPT, based on the millions of texts used in its training, predicts that the appropriate response is “Hello! How can I help you today?”. This response isn’t pre-programmed but constructed in real time from the context. And, as if by magic, millions started using AI for everyday tasks, from writing poems on a given topic, to finding specific jokes, to writing letters asking for a promotion, to converting various units of time and measurement.
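For the curious, the same exchange looks something like this in code. This is a minimal sketch assuming the official OpenAI Python SDK and an API key in the environment; the model name is illustrative:

```python
# A minimal sketch of a chat exchange, assuming the official OpenAI
# Python SDK (pip install openai) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model name
    messages=[{"role": "user", "content": "Hello"}],
)

# The reply is generated from context, not looked up in a table of canned answers.
print(response.choices[0].message.content)
# e.g. "Hello! How can I help you today?"
```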

The Paradox

But here’s the paradox. Consider a doctor who needs to write to an insurance company to request coverage of a rare procedure for a patient. Crafting the letter can be intricate, laden with formal and technical language, and it requires templates and clinical information. ChatGPT comes to mind, since we know it can generate eloquent, comprehensive letters from templates and contextual data. If we provide ChatGPT with some data from the Electronic Health Record and an insurance company letter template, the doctor would only need to briefly describe the letter’s purpose, and AI would handle the rest.
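A hedged sketch of how that might be wired together, under the same SDK assumption as above; the EHR snippet, template text, prompt wording, and model name are all illustrative, not a real EHR integration:

```python
# A sketch of gluing EHR data and a letter template into one prompt.
# All field values here are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

ehr_snippet = "58-year-old patient; myasthenia gravis; enlarged thymus on CT."
template = "Formal letter of medical necessity addressed to an insurer."
purpose = "Request coverage for a thymectomy."

prompt = (
    f"Follow this template: {template}\n"
    f"Clinical context: {ehr_snippet}\n"
    f"Purpose: {purpose}\n"
    "Write a persuasive, formally worded coverage letter."
)

letter = client.chat.completions.create(
    model="gpt-4",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)
print(letter.choices[0].message.content)
```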

Trust me, the result is impressive. ChatGPT can use the provided information to generate a very persuasive letter that stays consistent with it.

However, wouldn’t it be ironic if the insurance company, receiving hundreds or thousands of such letters, used the same AI to eliminate superfluous language, identify the essential information, and compress the letter’s content to its core message? The result? A concise paragraph that might closely resemble the one the doctor originally wrote.
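The insurer’s side of the loop looks almost identical, just pointed in the opposite direction. Another sketch under the same SDK assumption:

```python
# The insurer's side of the paradox: strip the formal padding and
# recover the core clinical request. Prompt and model name are illustrative.
from openai import OpenAI

client = OpenAI()

incoming_letter = "(full multi-page coverage letter received from the doctor)"

summary = client.chat.completions.create(
    model="gpt-4",  # illustrative model name
    messages=[{
        "role": "user",
        "content": "Remove the superfluous language and compress this letter "
                   "to its essential request, in one short paragraph:\n\n"
                   + incoming_letter,
    }],
)
print(summary.choices[0].message.content)  # likely close to the doctor's original note
```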

This seemingly circular process makes us question whether we needed to apply AI to this problem in the first place. If we inflate information before sending it, only for recipients to deflate it to make it useful, shouldn’t we have just sent concise information from the start? When we inflate and then deflate to get the same information back, don’t the two steps just cancel each other out? The answer is no, or at least, not today. Processes and regulations dictate these formats, and as long as they exist, AI and ChatGPT can help us comply with those rules efficiently.

Prioritizing Simplicity Over Sophistication

However, we should always remain open to simplifying the process as a way to innovate without adding technology. The temptation to use advanced AI tools like ChatGPT is understandable: they’re efficient, articulate, and make complex tasks easier. But, used counterproductively, their value dissipates.

To address this, perhaps the answer isn’t always in exploiting technology more but in rethinking our processes. Maybe what we need is a cultural shift in our communication methods. Instead of opting for verbose and complicated correspondence, we could coordinate with stakeholders to prioritize brevity and clarity.

The True Potential of Generative AI

AI has undeniable value in many scenarios, from predictive analysis to treatment personalization. And generative AI, like ChatGPT, is opening doors once deemed impossible. The key is distinguishing where AI adds value and where the existing process could be improved to make it more efficient.

So, let’s rethink the given example. The doctor wrote a short paragraph explaining why they recommend a specific treatment to the insurance company, in clinically heavy language. That language wouldn’t work for explaining the need for treatment to the patient. Wait a minute, isn’t ChatGPT a language genius? What if the doctor writes:

“A thymus resection via a transsternal approach is recommended. This organ, located in the anterior mediastinum, displays characteristic hypertrophy, similar to what’s seen in cases of myasthenia gravis and thymoma. It’s plausible it contains neoplastic cells, like those found in thymic carcinoma.”

Using this paragraph, ChatGPT drafts a letter for the insurer, adding diagnostic and surgical codes:

Presumptive diagnoses:
- Myasthenia gravis: ICD-10 code G70.0
- Thymoma: ICD-10 code D15.0
- Thymic carcinoma (suspected): ICD-10 code C37

Proposed surgical procedure:
- Thymectomy via transsternal approach: CPT code 60520

So far, everything works as in the previous example. But what if, using the same paragraph, ChatGPT also writes a letter to the patient, translating the medical terms into layman’s language:

“We recommend surgery to remove a part of the thymus, a gland in the chest. This gland appears larger than usual, similar to what’s seen in some conditions. It might contain abnormal cells, like those found in specific cancer types.”
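Putting both halves together: a sketch of reusing one clinical paragraph for two audiences, with the prompts and model name as illustrative assumptions:

```python
# One source paragraph, two audiences. Prompts and model name are illustrative.
from openai import OpenAI

client = OpenAI()

clinical_note = (
    "A thymus resection via a transsternal approach is recommended. "
    "The gland displays characteristic hypertrophy and may contain "
    "neoplastic cells."
)

audiences = {
    "insurer": "Write a formal coverage letter and include the relevant "
               "ICD-10 and CPT codes.",
    "patient": "Rewrite this in plain, reassuring language that a reader "
               "without medical training can understand.",
}

for name, instruction in audiences.items():
    reply = client.chat.completions.create(
        model="gpt-4",  # illustrative model name
        messages=[{"role": "user", "content": f"{instruction}\n\n{clinical_note}"}],
    )
    print(f"--- {name} ---\n{reply.choices[0].message.content}\n")
```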

Food for Thought

So, the paradox doesn’t diminish AI’s value but helps us identify the scenarios where it genuinely adds value.

The paradox highlighted by the doctor-insurance company interaction serves as a reminder that the most “brilliant” solution isn’t always the best. Before integrating AI into a process, it’s crucial to evaluate if it’s the right tool for the job or if a human-centered approach might suffice.

In the end, AI is a tool, not a solution in itself. Our challenge is to discern its optimal applications and to avoid complicating processes that could instead be simplified.