Applicable AI
- Jonathan Tan
- Oct 15, 2024
- 6 min read

Artificial Intelligence (AI) is everywhere—from social media to tech podcasts, it's hard to avoid the buzz. You might hear terms like "AI" and "LLMs" (large language models) being tossed around in meetings or read about them online. But how do you cut through the noise and understand AI in ways that matter for your work? Let's dive into some practical uses of AI, which are becoming increasingly relevant for professionals in many fields.

Figure 1: Google Trends results for "GenAI"
Navigating AI buzzwords
With so much AI content being published, it can be hard to tell what the terms mean and how they relate to each other. A similar situation arose in 2017, when "machine learning" became a media buzzword: many people found it confusing because there was rarely a clear explanation of what the terms meant or how they fit together.

Figure 2: Map of common data science buzzwords
Artificial Intelligence (AI) is a branch of computer science focused on building systems or machines that can perform tasks typically requiring human intelligence. These tasks include things like learning, problem-solving, understanding language, and even creativity. AI is made up of key components such as machine learning, natural language processing, computer vision, robotics, and cognitive computing.
There are two main types of AI: narrow AI and Artificial General Intelligence (AGI). Everything deployed today is narrow AI. AGI, which refers to AI that can perform any intellectual task a human can, remains a goal rather than a reality, though large language models (LLMs) represent significant progress towards it.
Large language models (LLMs) have become a crucial part of AI, both in practical applications and theoretical research. The development of language models goes back to the mid-20th century, when researchers built systems for tasks like translating Russian to English. At the time, the prevailing belief was that rule-based logic systems would lead to human-like intelligence. By the 1980s, however, it had become clear that these systems weren't flexible or scalable enough to handle tasks like common-sense reasoning and inference. Researchers shifted toward machine learning, where systems learn from large datasets rather than following predefined rules, and this approach brought progress in areas such as computer vision and speech recognition.

By the early 2010s, deep learning began to show its potential, supported by advances in hardware, and significant breakthroughs followed in image and speech tasks. Though these models were narrow in scope, they laid the groundwork for language-modelling improvements like word2vec, LSTMs, and ELMo. In 2018, Google introduced BERT (Bidirectional Encoder Representations from Transformers), built on the Transformer architecture and its attention mechanism. BERT's success led to numerous variants, with models reaching hundreds of millions of parameters.

In 2020, OpenAI's GPT-3 marked the beginning of the modern LLM era, with over 175 billion parameters, roughly 500 times more than BERT-large. Today, the largest LLMs approach or match human-level performance on many language and reasoning benchmarks.
Generative AI (GenAI) refers to AI systems capable of creating content, and many GenAI applications, such as ChatGPT, have LLMs at their core. These systems are made up of three key components: the infrastructure layer, the model layer, and the application layer. The infrastructure layer covers data storage, vector databases, and computing power; the model layer contains the predictive model (the LLM); and the application layer consists of APIs, SDKs, user interfaces, and security features. Building a GenAI system from scratch requires substantial investment, often running into millions or even billions of dollars for computational resources, research, and data processing. That is impractical for most professionals, but there are cost-effective ways to use pre-built LLMs without fine-tuning them (fine-tuning is beyond the scope of this article).

Figure 3: Simplified view of how the layers of a GenAI application like ChatGPT might interact with each other
source: https://algodaily.com/
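To make these layers concrete, here is a minimal sketch of an application layer calling a hosted model, assuming OpenAI's Python client and an API key in the `OPENAI_API_KEY` environment variable; the model name and prompt are illustrative placeholders.

```python
# Minimal application-layer sketch: the hosted provider supplies the
# infrastructure and model layers; our code only handles the API call.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; use whichever hosted model you have access to
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain what a vector database is in one sentence."},
    ],
)
print(response.choices[0].message.content)
```

Everything below that API call—the model weights, the GPUs, the data stores—belongs to the provider's model and infrastructure layers, which is exactly what makes this approach affordable.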
AI in its current state, although impressive, cannot do everything a human can within a single application. A quick internet search will turn up plenty of odd and amusing replies from various apps that are clearly wrong, or at least not what a human would answer.
GenAI as a search engine
One of the simplest and most cost-effective uses of Generative AI (GenAI) is treating it like an advanced search engine. While there are some differences between using a GenAI tool and a traditional search engine, let's compare Google and ChatGPT to highlight the contrast.
Imagine you work in an insurance company and need to gather competitor quotes for a report on market positioning. Normally, you'd visit an aggregator website (if available) or perform multiple Google searches to manually check prices across several companies. With a tool like ChatGPT, this process becomes more efficient. You could ask a question like, "What are the travel insurance prices for a 35-year-old flying from Sydney to Bali from October 31 to November 30 across companies in Australia?" ChatGPT would summarize information and potentially offer price comparisons from multiple insurers, saving you from sifting through multiple web pages.

Unlike Google Search, which provides a list of websites, ChatGPT delivers curated results based on the context of your question, without the influence of advertisements (as of this writing, at least). It's important to note that ChatGPT and other GenAI applications may rely on outdated information, as their training data typically has a cutoff (end of 2023 in this case), so be cautious if you need the most up-to-date information. Also, while ChatGPT's results are broader than what you'd find on individual websites, they are generated from a single prompt and can be refined for more accuracy with follow-up questions.
GenAI as a rubber duck
In software engineering, "rubber ducking" means debugging code by articulating the problem in natural language, traditionally to a rubber duck on your desk. For technical professionals, GenAI offers a revolutionary way to tackle coding problems. Previously, a developer might search Stack Overflow for snippets of code or explanations of an error. While Stack Overflow provides excellent resources, it requires sifting through various answers (the term "answers" here can be a stretch; "replies" is often more apt), which are often outdated or incomplete. A GenAI tool like ChatGPT, on the other hand, allows for dynamic conversations where you can iteratively ask for help. You can request: "How do I fix a TypeError in Python when using the Pandas library?" and not only will the AI generate a possible solution, but you can ask follow-up questions like, "Can you give me an example?"
This interaction is akin to having a personalized coding assistant that explains, corrects, and tests your code in real time. Like the previous example, the accuracy of the reply depends quite heavily on how much data and documentation exists for the question at hand.
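As an illustration, here is a hypothetical version of that Pandas exchange; the DataFrame and column name are made up, but the error and the suggested fix are typical:

```python
import pandas as pd

# Ages were loaded as strings, a common source of TypeErrors.
df = pd.DataFrame({"age": ["35", "42", "29"]})

# df["age"] + 1  # TypeError: can only concatenate str (not "int") to str

# The kind of fix an assistant would typically suggest: convert the column first.
df["age"] = pd.to_numeric(df["age"])
print(df["age"] + 1)  # 36, 43, 30
```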
In fact, you don't even need to communicate in natural language; you can simply paste the error message. The example below is a Python import error.

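The original screenshot isn't reproduced here, but a typical traceback of that kind looks like the following (a hypothetical stand-in; the file name is made up). Pasted on its own into ChatGPT, it is usually enough to get back an explanation and a `pip install` suggestion.

```
Traceback (most recent call last):
  File "report.py", line 1, in <module>
    import pandas as pd
ModuleNotFoundError: No module named 'pandas'
```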
Other applications that work well with GenAI, in my experience, are translating code from one language (or dialect) to another; writing repetitive boilerplate code, unit tests, or docstrings; and refactoring code. Detailed examples of these are covered in our GenAI for productivity courses.
GenAI as a model booster
This use case likely falls into the paid tier of most GenAI applications available today, as API access often requires a subscription. An alternative is to use open-source LLMs like Llama-3, but this involves building your own API connectors and hosting the model on your own infrastructure.
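For a sense of what the self-hosted route involves, here is a rough sketch using the Hugging Face transformers library; it assumes a machine with a suitable GPU and that you have been granted access to the Llama-3 weights on the Hugging Face Hub. You would still need to wrap this in your own API layer for production use.

```python
# Rough self-hosting sketch: load an open-source LLM and generate text locally.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Meta-Llama-3-8B-Instruct",  # gated model; requires approved access
    device_map="auto",  # place the model on available GPUs
)

result = generator(
    "Summarise the benefits of travel insurance in two sentences.",
    max_new_tokens=100,
)
print(result[0]["generated_text"])
```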
Let’s say you need to classify free text descriptions of claim lodgements into ICD-10 categories. Traditionally, this might have been done manually, but as the volume of claims grows, manual classification becomes inefficient. A machine learning pipeline could automate this process.

In this example, you'd use a supervised learning model that outputs a probability for each ICD-10 category and predicts the most likely code. However, because ICD-10 codes are quite specific, the model might struggle to distinguish between closely related codes, especially with a simple model like logistic regression. To improve accuracy, you could incorporate a more powerful model, such as an LLM, to refine the predictions.
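As a minimal sketch of the traditional side of this pipeline, the snippet below trains a TF-IDF plus logistic regression classifier; the claim texts and ICD-10 codes are invented for illustration:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training data: free-text claim descriptions and ICD-10 labels.
texts = [
    "slipped on stairs, fractured left wrist",
    "persistent cough and fever for two weeks",
    "lower back pain after lifting boxes",
]
labels = ["S52.5", "J20.9", "M54.5"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

# Probabilities per category; the top N feed the LLM refinement step below.
probs = clf.predict_proba(["hurt my wrist falling down the stairs"])[0]
top_n = np.argsort(probs)[::-1][:3]
for i in top_n:
    print(clf.classes_[i], round(probs[i], 3))
```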

The process would involve piping the top N predictions from your model into a prompt template, along with additional data points, and asking the GenAI model to select the most likely category. This hybrid approach leverages the power of a hosted LLM, keeping costs down, while still maintaining a traditional parametric model for explainability.
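A sketch of that refinement step might look like this, again assuming OpenAI's Python client; the prompt wording, model name, and candidate codes are illustrative:

```python
from openai import OpenAI

client = OpenAI()

claim_text = "hurt my wrist falling down the stairs"
# Top-N candidates from the classifier, with short descriptions for context.
candidates = [
    "S52.5 (fracture of lower end of radius)",
    "M54.5 (low back pain)",
    "J20.9 (acute bronchitis)",
]

prompt = (
    f"Claim description: {claim_text}\n"
    f"Candidate ICD-10 codes: {', '.join(candidates)}\n"
    "Reply with the single most likely code and a one-line justification."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

Keeping the parametric model in front means every final decision can still be traced back to an explainable shortlist, while the LLM only arbitrates between a handful of candidates per request.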
Summary
While the examples provided focus on how an insurance professional might use AI, the approach can easily be extended to other industries and roles. These examples highlight ways to implement AI without significant investment, though there are many advanced methods—like adapters, prompt tuning, Retrieval-Augmented Generation (RAG), and chaining—that can make LLMs and GenAI applications even more efficient.
For more real-world case studies, our EF-12 Productivity with LLMs course offers practical guidance on how professionals can fully leverage the power of today’s GenAI tools, regardless of their technical background.