
AI Explainer Part 1: Introductory Concepts

Eric Thanenthiran·11 March 2026·8 min read

In conversations with clients, I've realised that the list of AI terms commonly used today by specialists can be overwhelming. This glossary is our attempt to fix that: straightforward definitions for the terms that come up most often, written for business people rather than engineers. This is Part 1 in an ongoing series to help make this topic more accessible.

AI Glossary

AI (Artificial Intelligence)

Software that can independently perform human tasks like understanding language, recognising images, or making decisions. These systems have a degree of independence in carrying out a task. AI covers a much broader range of technologies than the Large Language Models that currently dominate the headlines. A spam filter learning to spot junk mail, a hospital system flagging abnormal scan results, a retail algorithm predicting which products to restock, and a fraud detection system catching suspicious transactions in real time are all AI. Large language models like ChatGPT are one branch of a much larger family of techniques, each suited to different problems and data types.

ML (Machine Learning)

A subset of AI where systems learn patterns from data rather than being explicitly programmed with rules. Instead of a developer writing out every possible scenario and telling the system what to do, you feed it examples and let it figure out the patterns itself. The more quality data it sees, the better it gets.

It helps to contrast it with traditional software. A traditional fraud detection system might have rules like "flag any transaction over £5,000 from a new device." An ML-based system instead learns from thousands of historical fraud cases, spotting subtle combinations of signals that no human would think to write a rule for, and keeps improving as new data comes in.
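The contrast can be sketched in a few lines of Python. This is a deliberately tiny toy, not a real fraud system: the data is made up, and the "learning" is just deriving a threshold from labelled examples rather than hard-coding it. But it captures the core difference between the two approaches.

```python
# Toy contrast between rule-based and learned fraud flagging.
# All amounts and data below are illustrative, not from any real system.

def rule_based_flag(amount, new_device):
    """Traditional software: a developer hard-codes the rule."""
    return amount > 5000 and new_device

# ML approach (minimal sketch): derive the flagging threshold from
# labelled historical transactions instead of writing it by hand.
history = [
    (120, False), (480, False), (5200, True),
    (300, False), (7900, True), (6100, True),
]  # (amount, was_fraud)

def learn_threshold(examples):
    """Pick the midpoint between the largest legitimate amount
    and the smallest fraudulent amount seen in the data."""
    largest_legit = max(a for a, fraud in examples if not fraud)
    smallest_fraud = min(a for a, fraud in examples if fraud)
    return (largest_legit + smallest_fraud) / 2

threshold = learn_threshold(history)

def learned_flag(amount):
    """Learned system: the threshold came from the data."""
    return amount > threshold
```

A real ML model would weigh many signals at once rather than a single amount, but the essential difference is the same: the first rule was written by a person; the second was derived from examples, and changes as the data changes.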

ML is a broad discipline with many different techniques under the hood, each suited to different types of problems and data:

  • Classification: Deciding which category something belongs to. Spam or not spam. Fraudulent or legitimate. High-risk patient or low-risk.
  • Regression: Predicting a numerical value. What will this property sell for? How many units will we shift next month?
  • Clustering: Finding natural groupings in data without being told what to look for. Often used in customer segmentation.
  • Recommendation systems: Powering the "you might also like" features on Netflix, Spotify, and Amazon.
  • Computer vision: Teaching machines to interpret images and video. Used in manufacturing defect detection, medical imaging, retail footfall analysis, and autonomous vehicles.
  • Time series forecasting: Predicting future values based on historical patterns. Useful for demand planning, energy management, and financial modelling.
  • Anomaly detection: Spotting things that look unusual compared to normal behaviour. Common in cybersecurity, equipment monitoring, and financial compliance.
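To make one of these techniques concrete, here is a minimal sketch of anomaly detection in Python: flag any reading that sits more than two standard deviations from the average. The sensor readings are invented for illustration, and real systems use far richer models, but the principle of "compare against normal behaviour" is the same.

```python
import statistics

# Toy anomaly detection: flag readings far from normal behaviour.
# The readings below (e.g. equipment temperatures) are made up.
readings = [21.0, 20.5, 21.2, 20.8, 21.1, 35.0, 20.9, 21.3]

mean = statistics.mean(readings)
stdev = statistics.pstdev(readings)

def is_anomaly(value, threshold=2.0):
    """A reading is anomalous if it lies more than `threshold`
    standard deviations from the mean of observed behaviour."""
    return abs(value - mean) > threshold * stdev

anomalies = [r for r in readings if is_anomaly(r)]
```

Running this flags only the 35.0 reading: everything else clusters around 21, so the outlier stands out statistically without anyone writing a rule that says "35 is too hot".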

LLMs are themselves a product of machine learning, but ML as a field is much older and broader. Many of the most valuable AI applications in businesses today are not chatbots at all; they are ML models quietly running in the background, making predictions, flagging issues, and optimising decisions at a scale no human team could match.

LLM (Large Language Model)

The type of AI behind tools like ChatGPT, Claude, and Gemini. Trained on enormous amounts of text scraped from books, websites, and code repositories, these models develop a surprisingly deep grasp of language, context, and even reasoning. You interact with them in natural language, and they can write, summarise, translate, explain, classify, extract information, answer questions, and generate code.

What makes LLMs particularly powerful for businesses is their flexibility. The same underlying model can draft a customer email, summarise a legal contract, extract key figures from a financial report, answer questions about your internal documentation, suggest fixes in a codebase, or help a new employee get up to speed. You are not building something from scratch for each use case; you are pointing a very capable general-purpose tool at your specific problem.

Some practical examples of where businesses are putting LLMs to work today:

  • Customer support: Handling routine queries automatically, with human agents stepping in for complex cases
  • Document processing: Reading invoices, contracts, or application forms and pulling out the information that matters
  • Internal knowledge bases: Letting staff ask questions in plain English and get answers drawn from company documentation
  • Content production: Drafting product descriptions, reports, marketing copy, or meeting summaries at scale
  • Code assistance: Helping developers write, review, and debug code faster
  • Sales and CRM: Summarising customer histories, drafting follow-up emails, flagging at-risk accounts

LLMs are not magic, and they are not always the right tool. They can make things up, they need careful handling, and they work best when paired with good data, clear instructions, and proper evaluation. But used well, they are one of the most versatile technologies businesses have had access to in a long time.
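As a sketch of what document processing looks like in practice: you ask the model for structured output, then parse its reply like any other data. The model's reply below is a hard-coded mock so the example runs without an API call; in a real system it would come back from a provider.

```python
import json

# Sketch of LLM-based document extraction: ask for structured JSON,
# then parse the reply. The invoice text and model reply are mocked
# for illustration; a real reply would come from a provider's API.

invoice_text = "Invoice #1042. Total due: £1,250.00 by 30 June."

prompt = (
    "Extract the invoice number and total due from the text below. "
    "Reply with JSON only, using keys 'invoice_number' and 'total_due'.\n\n"
    + invoice_text
)

# Mocked model reply (stands in for the API response).
mock_model_reply = '{"invoice_number": "1042", "total_due": "£1,250.00"}'

extracted = json.loads(mock_model_reply)
```

The pattern matters more than the code: clear instructions in, structured data out, and a validation step (here, the JSON parse) before anything downstream trusts the result. That validation step is part of the "careful handling" mentioned above.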

Provider

The company that builds, trains, and hosts a foundation model, making it accessible to developers and businesses, usually via an API. Most organisations will never train their own foundation model. Instead they connect to a provider's model over the internet and pay based on usage, much like subscribing to any other cloud service.
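To make "accessible via an API" concrete, here is the rough shape of a request many providers accept, in the common chat-completion style. The model name and field values are illustrative; the exact schema and endpoint vary by provider, so always check their documentation.

```python
import json

# Illustrative request payload in the common "chat completion" style.
# Model name and content are examples; schemas vary by provider.
payload = {
    "model": "example-model-name",
    "messages": [
        {"role": "system",
         "content": "You are a helpful assistant for a retail business."},
        {"role": "user",
         "content": "Summarise this customer complaint in one sentence: ..."},
    ],
    "max_tokens": 200,
}

# The payload is sent as JSON over HTTPS to the provider's endpoint,
# authenticated with an API key; the response comes back as JSON too.
body = json.dumps(payload)
```

Usage is typically metered on the amount of text processed, which is what makes the economics feel like any other pay-as-you-go cloud service.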

The main providers you are likely to encounter are:

  • OpenAI: The most recognised name in the space, behind GPT-4 and ChatGPT. A good default starting point for many use cases.
  • Anthropic: Makers of Claude, with a strong focus on safety and reliability. Particularly well regarded for tasks requiring careful reasoning and handling of long documents.
  • Google DeepMind: Behind the Gemini family of models, with deep integration into Google Cloud and Workspace products.
  • Meta: Releases its Llama models as open source, meaning businesses can download and run them on their own servers rather than sending data to a third party.
  • Mistral AI: A French company producing efficient, high-quality models that are popular in Europe, particularly where data residency and privacy regulations make self-hosting attractive.
  • xAI: Elon Musk's AI venture, producing the Grok model family. These routinely spark controversy, much like Elon himself.

Choosing a provider involves weighing up several factors beyond raw capability: pricing, data privacy and residency rules, reliability, how well the model performs on your specific tasks, and whether the provider's roadmap aligns with where your business is heading. It is rarely a permanent decision, and many organisations end up using more than one.

Foundation Model

A large, general-purpose AI model trained on vast amounts of data that can be adapted for many different tasks. Think of it as the engine underneath most modern AI products. Rather than building an AI system from scratch for every new problem, businesses and developers take a foundation model and adapt it to their specific needs, either by fine-tuning it on their own data, or simply by giving it the right instructions and context.

The term covers more than just language models. There are foundation models for images, audio, video, and even scientific domains like protein structures or satellite imagery. What they share is the same basic idea: train something enormous and general, then specialise it.

The major foundation models you are likely to encounter include:

  • GPT-4 and GPT-4o (OpenAI): Among the most widely used, powering ChatGPT and many third-party applications
  • Claude (Anthropic): Known for strong reasoning, long context handling, and a focus on safety
  • Gemini (Google): Google's flagship family of models, deeply integrated with their wider product ecosystem
  • Llama (Meta): An open source model that businesses can download and run on their own infrastructure, which appeals to organisations with strict data privacy requirements
  • Mistral (Mistral AI): A European provider producing capable, efficient models, popular for on-premise deployments
  • Whisper (OpenAI): A foundation model for speech recognition, turning audio into text

For most businesses, the decision is not whether to build a foundation model (that requires hundreds of millions in compute and data, and is the domain of a handful of well-funded labs) but which one to build on top of. Choosing the right foundation model for a given use case, balancing capability, cost, speed, privacy, and licensing, is one of the more consequential early decisions in any AI project.

Wrap Up

That's the groundwork laid. The terms above come up constantly in AI conversations, and having a handle on them makes everything else easier to follow. In Part 2 we'll get into the models themselves, diving into the terms commonly used to describe their operation and behaviour, to give us a better chance of understanding how they work (at a high level).
