Getting lost in the terminology of the AI world? Then this article is for you. We've put together the most searched and most used terms related to AI. You can use this article as a quick cheat sheet to help you understand terms you're likely to encounter more and more often.
We explain everything simply, so anyone can understand the concepts and you can get through them during a five-minute break between meetings. No unnecessary fluff, just the practical essentials.
Let's start at the beginning. What is...
AI
Artificial intelligence (AI) is a technology that allows computers and machines to simulate human learning, understanding, problem solving, decision making, creativity and autonomy. Think of it as an assistant that can process information in a similar way to a human. The concept of AI is not new - it originated in the 1950s. But the real boom has only come in recent years, thanks to new models that have greatly streamlined natural language understanding - in particular, so-called large language models (LLMs) have been groundbreaking. A typical example is ChatGPT.
Apple's Siri, for example, also uses elements of AI - and has been around since 2011.
Machine learning
Machine Learning is a field of artificial intelligence that allows computers to learn from data - without having to be programmed step by step.
Models learn to recognize patterns, relationships and rules that are not visible in the data at first glance. This enables them to make predictions, classify or make decisions on their own, even on new information they have never seen before.
The goal is for them to get progressively better at what they do as more data comes in.
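If you're curious what that looks like in practice, here is a minimal sketch using the scikit-learn library (the data and the "small vs. large house" task are made up for illustration):

```python
# A minimal sketch of machine learning with scikit-learn.
# The model is never told the rule; it infers it from the example data.
from sklearn.linear_model import LogisticRegression

# Training data: house size in m² -> 0 = "small", 1 = "large"
sizes = [[30], [45], [50], [90], [120], [150]]
labels = [0, 0, 0, 1, 1, 1]

model = LogisticRegression()
model.fit(sizes, labels)             # the "learning" step

print(model.predict([[40], [110]]))  # predicts classes for sizes it has never seen
```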
Deep learning
Deep learning is a type of machine learning that teaches AI to recognise patterns in data using neural networks - structures inspired by the human brain.
The more "layers" (neurons) a model has, the more complex things it can understand. While classical Machine Learning often makes do with smaller amounts of data and simpler algorithms, deep learning needs larger amounts of data and more powerful computing.
LLM
A Large Language Model (LLM) is a form of artificial intelligence that specializes in working with language - it understands texts and can write, summarize, translate or answer questions. It learns from extensive analysis of text data available on the internet. LLMs are behind tools such as ChatGPT, Gemini or Claude.
Chatbot
A chatbot is a program that communicates with people using text or voice. It usually helps with simple tasks - answering questions, making reservations or dealing with customer support queries.
It used to work mainly according to pre-written rules (e.g. "if a user types A, answer B"). Today, modern chatbots use various forms of artificial intelligence, and increasingly they are large language models (LLMs), so they can respond more intelligently and naturally.
The important thing is that chatbot ≠ AI. Most web chatbots still work without using AI.
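To make that distinction concrete, here is a purely rule-based chatbot in a few lines of Python - no AI anywhere in sight (the rules and replies are made up):

```python
# A rule-based chatbot: the "if a user types A, answer B" approach mentioned above.
RULES = {
    "opening hours": "We are open Mon-Fri, 9:00-17:00.",
    "price": "Our basic plan costs $10 per month.",
}

def reply(message: str) -> str:
    for keyword, answer in RULES.items():
        if keyword in message.lower():
            return answer
    return "Sorry, I don't understand. A human colleague will get back to you."

print(reply("What are your opening hours?"))
print(reply("Can you write me a poem?"))  # a rule-based bot simply gives up here
```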
Agent
An agent is a type of AI that can perform more complex tasks on its own - without you giving it each step individually. Just tell it the goal, and it will plan its own way to it.
While a conventional model like ChatGPT answers one specific prompt, an agent can break the task into smaller parts, think about the process and make decisions in multiple steps. It is suitable for more complex tasks, planning and automation.
Tools that already work with agents include OpenAI Deep Research or advanced versions of ChatGPT.
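Conceptually, an agent runs a loop of "plan, act, observe, repeat". The sketch below illustrates that loop with made-up helper functions; a real agent would let an LLM choose each step and call real tools:

```python
# A highly simplified sketch of the agent loop (hypothetical helpers, not a real framework):
# the agent repeatedly decides on the next step by itself until the goal is done.
def plan_next_step(goal, history):
    # In a real agent, an LLM decides what to do next based on the goal and the
    # results so far. Here we just walk through a fixed toy plan.
    steps = ["search the web", "summarize findings", "write the report"]
    return steps[len(history)] if len(history) < len(steps) else None

def execute(step):
    return f"result of '{step}'"  # a real agent would call a tool or an API here

goal = "Prepare a short market overview"
history = []
while (step := plan_next_step(goal, history)) is not None:
    history.append(execute(step))   # act, observe, then plan again

print(history)
```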
Multi-agent
Multi-agent is a system in which multiple AI agents work together at the same time - each has its own task, but together they solve one larger problem. They work autonomously but collaborate, much like a team of humans.
Practically? A multi-agent system could, for example, plan a marketing campaign by itself: one agent researches the market, another prepares the texts, a third analyses the results and optimises the strategy in real time.
Currently, only a handful of multi-agent systems are running in production. Most have not yet reached a level where they can undergo extensive user testing - these are very complex, heavyweight solutions.
Prompt
A prompt is the assignment you give to an AI. It can be a question, a sentence, a task or even a description of a picture. By giving a prompt, you're essentially saying: "This is what I want you to do." Knowing how to prompt well is the key to getting a good result from AI.
Prompt Engineering
Prompt engineering is the skill that determines how to properly instruct the AI so that the output is as accurate, practical and meaningful as possible.
Prompt engineers devise cleverly worded inputs (prompts) to make AI tools work exactly as they should - for example, to type in the right tone, respond according to context, or generate specific outputs.
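As a rough illustration, this is what a carefully structured prompt can look like when sent through OpenAI's Python client (the model name, wording and setup are illustrative choices, and you need your own API key):

```python
# A sketch of prompt engineering with OpenAI's Python client.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        # The system prompt sets the tone and constraints...
        {"role": "system", "content": "You are a helpful support assistant. "
                                      "Answer in two short sentences, in a friendly tone."},
        # ...and the user prompt carries the actual task.
        {"role": "user", "content": "Explain what a token is to a non-technical customer."},
    ],
)
print(response.choices[0].message.content)
```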
Token
A token is the smallest unit of text that the AI works with. It can be a whole word, part of a word, or even a punctuation mark.
The AI doesn't read text like humans do - instead, it breaks it into tokens. For example, the sentence "Hello world!" can be split into three tokens: "Hello", " world" and "!".
The number of tokens affects how much the AI can "read" or "generate" at a time. The longer the input, the more tokens you consume. With many AI tools, you pay for the number of tokens you use, or you are limited in how many you can use.
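If you want to see tokenization for yourself, the tiktoken library (used by OpenAI models; other models have their own tokenizers) can split text into tokens:

```python
# Counting tokens with tiktoken (assumes the library is installed).
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
ids = enc.encode("Hello world!")

print(len(ids))                          # number of tokens, e.g. 3
print([enc.decode([i]) for i in ids])    # the text each token stands for
```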
Embedding
Embedding is a way for AI to "translate" words and sentences into numbers (into vectors) that models can understand and thus work with.
Think of it as a map where the AI stores words that are similar in meaning close together - for example, "cat" and "dog" lie closer together than "cat" and "table".
With embeddings, AI can search by meaning, compare texts or recommend content that is related.
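The toy example below shows the idea with made-up three-dimensional vectors; real embeddings come from a model and have hundreds or thousands of dimensions:

```python
# A toy illustration of embeddings: similar meanings -> similar vectors -> high similarity.
import math

embeddings = {
    "cat":   [0.90, 0.80, 0.10],
    "dog":   [0.85, 0.75, 0.20],
    "table": [0.10, 0.20, 0.90],
}

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

print(cosine_similarity(embeddings["cat"], embeddings["dog"]))    # high -> similar meaning
print(cosine_similarity(embeddings["cat"], embeddings["table"]))  # low -> unrelated
```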
Context window
The context window is the amount of text that the AI can "hold in its head" within a single task or conversation.
For example, the GPT-4o model (used in ChatGPT) can handle up to 128,000 tokens, which is equivalent to about 300 pages of text. Older or simpler models, however, only handle a few thousand tokens.
The larger the context window, the better the AI understands the context, stays on topic, and responds consistently even in more complex tasks.
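Here is a small sketch of how you might check whether a document fits into a context window, again using tiktoken (the file name is hypothetical and the 128,000-token limit refers to GPT-4o, as mentioned above):

```python
# Checking whether a text fits into a model's context window.
import tiktoken

CONTEXT_WINDOW = 128_000
enc = tiktoken.get_encoding("cl100k_base")

with open("contract.txt", encoding="utf-8") as f:   # hypothetical document
    n_tokens = len(enc.encode(f.read()))

if n_tokens <= CONTEXT_WINDOW:
    print(f"{n_tokens} tokens -> fits in one go")
else:
    print(f"{n_tokens} tokens -> too long, needs splitting")
```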
Fine-tuning
Fine-tuning is the process of "tweaking" an already pre-trained AI model for a specific purpose or data type.
Think of it as retraining a generally smart model to be a specialist - perhaps an AI that understands a little bit of everything, but after fine-tuning, focuses only on legal texts, customer support, or specific brand tone.
It is used when you want the model to be more responsive to specific requirements.
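For a concrete picture, this is roughly what fine-tuning data can look like - here in the JSONL chat format that OpenAI's fine-tuning API accepts (the file name and examples are made up):

```python
# Preparing fine-tuning examples in JSONL: each line is one conversation the model
# should learn to imitate.
import json

examples = [
    {"messages": [
        {"role": "system", "content": "You answer in the brand's playful tone."},
        {"role": "user", "content": "Do you ship abroad?"},
        {"role": "assistant", "content": "We sure do! Your order can travel almost anywhere in Europe."},
    ]},
    # ...hundreds more examples like this teach the model the desired style
]

with open("training_data.jsonl", "w", encoding="utf-8") as f:
    for example in examples:
        f.write(json.dumps(example, ensure_ascii=False) + "\n")
```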
Reasoning model
A Reasoning model is a type of language model that does not just focus on what it has already "seen" in the training data, but can reason logically, plan actions and solve problems.
While the conventional model generates answers based on similarities in the texts, the reasoning model tries to find a step-by-step path to the answer - much like a person thinking about a complex problem.
Reasoning models include OpenAI's o1, Gemini 2.5 Pro and DeepSeek R1.
Generative AI
Generative AI is a type of artificial intelligence that creates new content - text, images, sound, code or videos.
Unlike other models, such as predictive AI, which works with statistics or time series, generative AI creates something new based on what it has learned from training data.
In practice, generative AI is often confused with LLMs. LLMs are one form of generative model, but generative AI is a broader term that also includes models for creating images, sound or video.
Deep research
Deep research refers to the ability of AI to find, sort and evaluate information from different sources to give you the most accurate answer.
It's not just a quick answer to a question, but a comprehensive search. It is used, for example, when writing technical texts, doing market analysis or making strategic decisions. It goes hand in hand with tools that have access to current data on the internet (e.g. ChatGPT Deep Research, Perplexity.ai or Consensus).
Benchmark
Benchmark is a standardized test that measures and compares the performance of AI models.
Models like ChatGPT, Claude or Gemini are regularly tested on benchmarks like MMLU (for general knowledge) or GPQA. The results are then often used as a basis for selecting the appropriate tool.
Benchmarks are useful for orientation, but often test only narrow skills. In real-world use, a model with a lower score may well perform better.
You can see how the individual models are doing here.
NLP
Natural Language Processing (NLP) is a field of AI that deals with understanding human language.
Thanks to NLP, AI can read, write, answer questions or even recognise the tone of a message.
AGI
Artificial General Intelligence is a hypothetical type of AI that could handle any task as well as (or better than) a human.
AGI could independently invent a scientific theory, settle a legal dispute, or run a business. It doesn't exist yet, but companies like OpenAI and Anthropic are trying to build it.
RAG
Retrieval-augmented generation is a way of connecting AI to external data - for example, a company's own up-to-date data, which the model retrieves and uses to generate its answers. This way it works with verified information, making it more accurate and reliable.
RAG is typically used for corporate chatbots that respond according to internal documentation, or AI assistants that draw on current articles, contracts or data.
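Here is a heavily simplified sketch of the RAG idea: first retrieve the relevant internal documents, then hand them to the model along with the question. The documents and the keyword-based "retrieval" below are made up; a real system would use embeddings and a vector database:

```python
# A toy RAG pipeline: retrieve relevant documents, then build a grounded prompt.
documents = {
    "vacation_policy.md": "Employees have 25 days of paid vacation per year.",
    "remote_work.md": "Remote work is possible up to 3 days a week.",
}

def retrieve(question):
    # Real retrieval compares embeddings; here we just match keywords.
    return [text for name, text in documents.items()
            if any(word in text.lower() for word in question.lower().split())]

question = "How many days of vacation do I have?"
context = "\n".join(retrieve(question))

prompt = f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}"
print(prompt)   # this prompt would then be sent to an LLM
```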
CAG
Cache-augmented generation works by storing important information in the AI's context window in advance, so it can be used later. This lets the AI respond faster and stay on topic, because it already has the knowledge it needs at hand. Unlike other approaches, it doesn't look up information at the moment of the query - it has it ready.
It is used in tools that work with the same context repeatedly - for example, chatbots that guide users through a multi-step task.
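A very rough sketch of the idea (the file and names are made up): the knowledge is loaded once up front and reused for every question. In a real CAG setup the model's internal cache is reused as well, so the preloaded text doesn't have to be re-processed each time:

```python
# Preloading knowledge once and reusing it for every question -- no retrieval at query time.
with open("product_manual.txt", encoding="utf-8") as f:   # hypothetical file
    KNOWLEDGE = f.read()

def build_prompt(question):
    # The same preloaded context accompanies every question.
    return f"Use the product manual below to answer.\n\n{KNOWLEDGE}\n\nQuestion: {question}"

for q in ["How do I reset the device?", "What does the red light mean?"]:
    print(build_prompt(q)[:80], "...")   # each prompt would be sent to the model
```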
Conclusion
The world of AI is changing incredibly fast - and with it, the number of new terms people encounter keeps growing. So we'll be working on version 2.0 (and maybe even 3.0) in time.
We hope that after reading this article these concepts make more sense to you and will be useful in practice. 🚀
The last piece in this series will examine how to use AI technologies to optimize performance on a variety of tasks, including technical writing, marketing, organizing your work, creating graphics, and much more. Let's look at it together.