Glossary

Learn about key AI terms and AI Chatbot Hub definitions by exploring this beginner-friendly glossary to strengthen your AI vocabulary.


AI Agents

These are specialized AI models that can perform specific tasks within your chatbot. For example, they can handle customer inquiries, provide recommendations, or perform other tasks. Each chatbot can have a certain number of AI Agents assigned to it.

From our blog: What are AI Agents?

Context window

The “context window” refers to the amount of text a language model can look back on and reference when generating new text. A larger context window allows the model to understand and respond to more complex and lengthy prompts, while a smaller context window may limit the model’s ability to handle longer prompts or maintain coherence over extended conversations.

The context window is often measured in tokens (pieces of words or whole words). For example, a model with a 4,000-token context window can process approximately 3,000 words at once, because one token corresponds to roughly three-quarters of an English word on average (common short words like "and," "the," or "to" are usually a single token).
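The token-to-word arithmetic above can be sketched as a quick estimator. The 0.75 words-per-token ratio is the common rule of thumb, not an exact figure; real counts depend on the model's tokenizer.

```python
# Rough rule of thumb: 1 token ~ 0.75 English words.
# This is an approximation; exact counts depend on the tokenizer.

def estimate_words(context_window_tokens: int, words_per_token: float = 0.75) -> int:
    """Estimate how many English words fit in a given context window."""
    return int(context_window_tokens * words_per_token)

print(estimate_words(4_000))    # ~3000 words, as in the example above
print(estimate_words(128_000))  # ~96000 words for a larger window
```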

Feature credits (FC)

These are used to power advanced functions within the AI Chatbot Hub platform. Currently, FCs are used for sending automated emails when new leads are collected and for real-time knowledge enrichment from Google Search. In the future, they will also be used for features like generative image creation and text-to-speech options.

Fine-tuning

Fine-tuning is the process of further training a pretrained language model using additional data. This causes the model to start representing and mimicking the patterns and characteristics of the fine-tuning dataset. Fine-tuning can be useful for adapting a language model to a specific domain, task, or writing style, but it requires careful consideration of the fine-tuning data and the potential impact on the model’s performance and biases.

Function calling (advanced)

This feature allows AI agents to call specific functions or APIs to perform tasks like retrieving data from a database or triggering an action in another system.

Here are a couple of real-life examples of function calling in action:

  1. Booking a Restaurant Reservation: In a chatbot for a dining app, function calling enables the AI to process a user’s request for a reservation. When the user says, “Book a table for two at 7 PM at Mario’s Italian,” the model calls a reservation API function, automatically passing in the details to confirm the booking and providing a response with the reservation details.

  2. Tracking an Order in E-commerce: When a customer asks, “Where’s my order?” an AI assistant can call an order tracking function that retrieves the latest shipment status. This function pulls data from the company’s logistics system and returns the current location, estimated delivery date, or any recent updates, all within the same chat.
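The order-tracking example above can be sketched in code. This is an illustrative outline of the general pattern, not AI Chatbot Hub's actual implementation: the model emits a function name plus JSON arguments, and the application dispatches the call and returns the result to the chat. The `track_order` function and its fields are hypothetical stand-ins for a real logistics API.

```python
import json

# Hypothetical tool: in a real system this would query the logistics API.
def track_order(order_id: str) -> dict:
    return {"order_id": order_id, "status": "in transit", "eta": "2025-06-01"}

# Registry mapping tool names (as the model emits them) to callables.
TOOLS = {"track_order": track_order}

def handle_model_tool_call(call: dict) -> str:
    """Dispatch a model-issued function call and return its JSON result."""
    fn = TOOLS[call["name"]]
    result = fn(**json.loads(call["arguments"]))
    return json.dumps(result)

# Example: the model decided to call track_order for "Where's my order?"
print(handle_model_tool_call(
    {"name": "track_order", "arguments": '{"order_id": "A123"}'}
))
```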

View the Function calling guide

Guardrails

Guardrails are the restrictions and rules placed on AI systems to ensure that they handle data appropriately and don't generate unethical content.

Hallucination

Hallucination refers to an incorrect response from an AI system, or false information in an output that is presented as fact.

Intents

Intents are the goals or purposes behind a user’s input, guiding the AI to generate a relevant response. For example, if a user types “Tell me the weather,” the intent is to get a weather update, or if they ask “Book a table,” the intent is to make a restaurant reservation.

Examples of intents:

  • Asking for product details: “What’s the price of this item?”

  • Seeking assistance: “I need help with my account.”

  • Making an appointment: “Book me in for 7 PM.”
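A toy intent matcher illustrates the idea behind the examples above. Real systems use ML classifiers or the LLM itself rather than keyword rules; the intent names and keywords here are purely illustrative.

```python
# Illustrative keyword-based intent matcher (real systems use ML or LLMs).
INTENT_KEYWORDS = {
    "get_weather": ["weather", "forecast"],
    "book_table": ["book", "reservation", "table"],
    "account_help": ["help", "account"],
}

def detect_intent(message: str) -> str:
    """Return the first intent whose keywords appear in the message."""
    text = message.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return intent
    return "unknown"

print(detect_intent("Tell me the weather"))   # get_weather
print(detect_intent("Book me in for 7 PM"))   # book_table
```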

Learn more - Fine Tuning Agent Intents

LLM

Large language models (LLMs) are AI language models with many parameters that are capable of performing a variety of surprisingly useful tasks. These models are trained on vast amounts of text data and can generate human-like text, answer questions, summarize information, and more.

Inside AI Chatbot Hub there are three main LLM providers to choose from: OpenAI, Anthropic, and Google (Gemini).

Prompt

A prompt is an input that a user feeds to an AI system in order to get a desired result or output.

Prompt engineering

Prompt engineering is a technical term for a straightforward action: it means prompting (or requesting) a generative AI tool to perform a task. Strong prompt engineering typically requires refining your prompts with context to get the most specific and useful result.

From our blog: What is prompt engineering?

RAG (Retrieval augmented generation)

Retrieval-Augmented Generation (RAG) combines information retrieval with language model generation to enhance accuracy by grounding responses in external knowledge sources. In RAG, relevant data from a knowledge base is retrieved in real-time based on the input prompt and then fed into the model, allowing it to generate responses with greater factual accuracy and relevance by using external information rather than relying solely on its training data.

Examples of RAG

  • Customer Support Knowledge Base: When a customer asks about troubleshooting steps for a product, RAG can retrieve the latest support articles or FAQs from a knowledge base, allowing the model to provide accurate, up-to-date guidance without relying on memorized responses.

  • Product Information in E-commerce: If a shopper asks about product specifications or inventory status, RAG can pull real-time data from a catalog or inventory database, giving precise answers about size availability, materials, or shipping options.

Storage space

This is the amount of raw data storage (in MBs) allocated to your account for storing chatbot data, user interactions, and other relevant information. Different plans offer varying amounts of storage space.

Tags

Inside AI Chatbot Hub these are tags automatically generated by the AI to categorize and organize conversations based on their content. You can specify the criteria for assigning tags.

Temperature

Temperature is a parameter that controls the randomness of a model’s predictions during text generation. Higher temperatures lead to more creative and diverse outputs, allowing for multiple variations in phrasing and, in the case of fiction, variation in answers as well. Lower temperatures result in more conservative and deterministic outputs that stick to the most probable phrasing and answers.
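Under the hood, temperature rescales the model's next-token probability distribution, which can be shown with a standard softmax calculation. The logits below are made-up values for illustration.

```python
import math

# Temperature divides the logits before the softmax:
# T < 1 sharpens the distribution (more deterministic),
# T > 1 flattens it (more diverse). Logits here are illustrative.
def softmax_with_temperature(logits, temperature):
    scaled = [logit / temperature for logit in logits]
    max_scaled = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - max_scaled) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
print(softmax_with_temperature(logits, 0.5))  # top option dominates
print(softmax_with_temperature(logits, 2.0))  # probabilities flatten out
```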

Tokens

Tokens are the smallest individual units of text that a language model processes, and can correspond to words, subwords, characters, or even bytes (in the case of Unicode). A token represents approximately 3.5 English characters, though the exact number can vary depending on the language used.
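The characters-per-token figure above gives a quick back-of-the-envelope estimator. This is only an approximation; exact counts require the model's actual tokenizer (for example, OpenAI's tiktoken library).

```python
# Quick estimate using the ~3.5-characters-per-token rule of thumb.
# For exact counts, use the model's real tokenizer instead.
def estimate_tokens(text: str, chars_per_token: float = 3.5) -> int:
    """Approximate the token count of an English text."""
    return round(len(text) / chars_per_token)

sample = "The quick brown fox jumps over the lazy dog."
print(estimate_tokens(sample))  # 44 characters / 3.5 -> about 13 tokens
```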

Learn more: OpenAI - What are tokens and how to count them?

Training data

Training data is the information or examples given to an AI system to enable it to learn, find patterns, and create new content. Inside AI Chatbot Hub it could be anything from Word and PDF files to tables and CSV files, YouTube videos, images, website URLs, and more.

Learn more: best practices for preparing training data

Variables

These are used to store and manage dynamic data within each chat session, such as a user's personal data or conversation-specific details, depending on the use case.


Did we miss any term? Let us know!
