

AI Glossary

The 101 on all the AI terms you need to know, whether you're brand new to AI or have used it before. See something missing? Send us a note and we'll get it added!

Active Learning
A training approach where the algorithm selectively chooses which examples to learn from, instead of blindly searching through a diverse range of labeled examples.
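One common way to choose those examples is uncertainty sampling: the model asks for labels on the inputs it is least sure about. A minimal sketch (the probabilities and function name are illustrative, not a real library API):

```python
# Uncertainty sampling: pick the unlabeled examples whose predicted
# probability is closest to 0.5, i.e. where the model is least confident.
def select_for_labeling(probabilities, k):
    """Return the indices of the k most uncertain examples."""
    uncertainty = [abs(p - 0.5) for p in probabilities]
    ranked = sorted(range(len(probabilities)), key=lambda i: uncertainty[i])
    return ranked[:k]

# Model confidence on five unlabeled examples (made-up numbers):
probs = [0.95, 0.52, 0.10, 0.48, 0.80]
print(select_for_labeling(probs, 2))  # → [1, 3], the two examples nearest 0.5
```

Those two examples would then be sent to a human for labeling, giving the model the most informative new training data per label.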

AdaGrad
A sophisticated gradient-based optimization algorithm that effectively gives each parameter its own learning rate and incorporates knowledge from past gradients.
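One well-known optimizer that fits this description is AdaGrad: it accumulates each parameter's past squared gradients and divides the base learning rate by their square root, so parameters with consistently large gradients take smaller steps. A minimal sketch (learning rate and gradients are illustrative):

```python
import math

def adagrad_step(params, grads, accum, lr=0.1, eps=1e-8):
    """One AdaGrad update: per-parameter step sizes from accumulated gradients."""
    for i, g in enumerate(grads):
        accum[i] += g * g  # remember the magnitude of past gradients
        params[i] -= lr * g / (math.sqrt(accum[i]) + eps)
    return params, accum

params, accum = [1.0, 1.0], [0.0, 0.0]
# A large gradient on the first parameter, a small one on the second:
params, accum = adagrad_step(params, [10.0, 0.1], accum)
print(params)  # both parameters move by a similar amount despite the 100x gradient gap
```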

AI Alignment
A field of AI safety research that aims to build safe, secure AI systems that produce accurate, desired outcomes.

Anomaly Detection
The process of identifying outliers in a dataset to ensure conformity and accuracy.
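A simple illustration of the idea, using the classic z-score rule: flag any value that sits more than a couple of standard deviations from the mean (the price data and threshold are made up for the example):

```python
def zscore_outliers(values, threshold=2.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    n = len(values)
    mean = sum(values) / n
    std = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return [v for v in values if abs(v - mean) > threshold * std]

prices = [19.99, 21.50, 20.25, 18.75, 20.00, 999.00]  # one data-entry error
print(zscore_outliers(prices))  # → [999.0]
```

Real anomaly-detection systems use more robust techniques (isolation forests, autoencoders), but the goal is the same: surface the values that don't conform.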

Artificial Intelligence (AI)
The theory and development of computer systems that aim to mimic the problem-solving and decision-making capabilities of the human mind.

Automated Machine Learning (AutoML)
The process of automating machine learning tasks, from data preparation to deployment, to help non-technical users by simplifying complex processes, saving them time, and improving prediction accuracy.

Deep Learning
A method in artificial intelligence, inspired by the human brain, that teaches computers to recognize complex patterns and produce accurate insights or predictions from images, text, and sound.

Generative AI
The use of artificial intelligence technology to create original content from scratch, including text, imagery, and audio.

Generative Pre-trained Transformer (GPT)
Developed by OpenAI, this machine learning model is trained on data from the internet to generate any type of text. It requires only a small amount of input text to create large volumes of relevant, sophisticated responses.

Grounding
The process of linking an AI system's abstract knowledge to contextualized, real-life examples to produce better predictions.

Hallucination
When a Large Language Model (LLM) generates false information: because the model has no real understanding of the context of the input, it can produce text that is grammatically and semantically correct yet factually wrong.

Hidden Layer
A layer in a neural network that sits between the input layer of features and the prediction in the output layer.

Large Language Model (LLM)
A language model characterized by its large size, made possible by AI accelerators that can process vast amounts of text data, mostly scraped from the internet. Notable examples include OpenAI's GPT models (e.g., GPT-3.5 and GPT-4, used in ChatGPT), Google's PaLM (used in Bard), and Meta's LLaMa, as well as BLOOM, Ernie 3.0 Titan, and Claude.

Learning Algorithm
A set of instructions used in machine learning that allows a computer program to extrapolate information from training data and use what it learns to make predictions about new inputs. The math and logic of these algorithms can improve on their own over time as more data is provided.

Learning Rate
The number that tells the algorithm how heavily to adjust the weights and biases in response to different data points.
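To make this concrete, here is a toy gradient-descent loop minimizing f(w) = (w - 3)², where the learning rate scales every adjustment. The function and values are illustrative; note how a too-large rate overshoots instead of converging:

```python
def gradient_descent(w, lr, steps):
    """Minimize f(w) = (w - 3)^2; its gradient is 2 * (w - 3)."""
    for _ in range(steps):
        grad = 2 * (w - 3)
        w -= lr * grad  # the learning rate scales each adjustment
    return w

# A modest learning rate steps steadily toward the minimum at w = 3:
print(gradient_descent(w=0.0, lr=0.1, steps=50))
# Too large a learning rate overshoots the minimum and diverges:
print(gradient_descent(w=0.0, lr=1.1, steps=50))
```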

Loss Function
A mathematical function that calculates how far a model's prediction is from its label. The goal of training is to improve prediction accuracy and minimize the loss produced.
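A common concrete example is mean squared error: average the squared gap between each prediction and its label, so a perfect model scores zero. A minimal sketch with made-up numbers:

```python
def mse_loss(predictions, labels):
    """Mean squared error: the average squared gap between prediction and label."""
    return sum((p - y) ** 2 for p, y in zip(predictions, labels)) / len(labels)

labels = [3.0, 5.0, 7.0]
print(mse_loss([2.0, 5.0, 9.0], labels))  # imperfect predictions: loss > 0
print(mse_loss([3.0, 5.0, 7.0], labels))  # perfect predictions: loss is 0.0
```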

Machine Learning
The use and development of computer systems that learn and adapt without following explicit instructions, instead drawing inferences from data using algorithms and statistical models.

Natural Language Processing (NLP)
A method in artificial intelligence, inspired by the way humans process language, that coaches computers to understand text and spoken words, complete with the speaker or writer's original intent and sentiment.

Neural Network
A model that can mimic complex nonlinear relationships between features and labels through neurons connected to nodes in different layers.
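The data flow can be sketched with a tiny network: two input features pass through two hidden neurons (each applying a nonlinearity) before reaching a single output. The weights below are hand-picked purely to illustrate the wiring, not trained:

```python
def relu(x):
    """The nonlinearity that lets layers model nonlinear relationships."""
    return max(0.0, x)

def forward(x, hidden_weights, output_weights):
    """Tiny network: 2 inputs -> 2 hidden neurons (ReLU) -> 1 output."""
    hidden = [relu(sum(w * xi for w, xi in zip(ws, x))) for ws in hidden_weights]
    return sum(w * h for w, h in zip(output_weights, hidden))

hidden_w = [[1.0, -1.0], [0.5, 0.5]]  # illustrative, untrained weights
output_w = [1.0, 2.0]
print(forward([2.0, 1.0], hidden_w, output_w))  # → 4.0
```

Training would adjust those weights (via a loss function and learning rate) until the outputs match the labels.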

Open-Source AI
Artificial intelligence projects that are open to the public to develop, with the goal of collaborating and learning with the community. Open-source models are typically faster, more innovative, and more customizable, but pose some obvious security and liability risks.

Product Experience (PX) Strategy
A comprehensive strategy for building and delivering world-class product experiences across every customer touchpoint to accelerate growth, stay competitive, and support the organization's overall goals.

Prompt Engineering
The art of crafting well-structured prompts that elicit the desired output from a large language model.
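In practice this often means templating: fixing the model's role, format, and constraints up front instead of asking a bare question. A sketch of one such template (the wording and fields are illustrative, not a prescribed format):

```python
# A structured prompt pins down role, task, tone, and constraints before asking.
PROMPT_TEMPLATE = """You are an e-commerce copywriter.
Write a product description for: {product}
Tone: {tone}
Constraints: at most {max_words} words, no superlatives, end with a call to action.
"""

def build_prompt(product, tone="friendly", max_words=60):
    """Fill the template so every request carries the same guardrails."""
    return PROMPT_TEMPLATE.format(product=product, tone=tone, max_words=max_words)

print(build_prompt("stainless-steel water bottle, 750 ml"))
```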

Proprietary AI
Artificial intelligence projects that are developed, packaged, and sold by a single organization. Proprietary models are typically better funded than open-source models, so they can often afford to implement new advances quickly and have the resources to support agility and scalability in an uncertain market.

Supervised Learning
Training a model on specific features and their corresponding labels, similar to a student studying a set of questions and their answers.
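The "questions and answers" idea can be shown with one of the simplest supervised methods, a nearest-neighbour classifier: given labeled (feature, label) pairs, predict a new input's label from its closest training example. The data here is made up:

```python
def nearest_neighbor(train, query):
    """1-nearest-neighbour: predict the label of the closest training example."""
    closest = min(train, key=lambda pair: abs(pair[0] - query))
    return closest[1]

# Labeled training data: (price_feature, label) pairs -- the Q&A the model studies.
train = [(1.0, "cheap"), (2.0, "cheap"), (8.0, "expensive"), (9.0, "expensive")]
print(nearest_neighbor(train, 1.5))  # → "cheap"
print(nearest_neighbor(train, 8.5))  # → "expensive"
```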

Taxonomy
A set structure used to organize and categorize a vast amount of product information in a logical, easy-to-understand way. The main goal is to present both structured and unstructured product data in a way that is quickly digestible for both internal teams and consumers.
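A toy product taxonomy might look like the nested structure below, with categories narrowing from general to specific and each leaf listing its structured attributes (all category names and fields here are invented for illustration):

```python
# Categories nest from general to specific; leaves hold structured attributes.
taxonomy = {
    "Apparel": {
        "Footwear": {
            "Running Shoes": {"attributes": ["size", "color", "drop_mm"]},
            "Sandals": {"attributes": ["size", "color"]},
        },
        "Outerwear": {
            "Rain Jackets": {"attributes": ["size", "color", "waterproof_rating"]},
        },
    },
}

def find_attributes(tree, category):
    """Walk the tree and return the attribute list for a category, if present."""
    for name, subtree in tree.items():
        if name == category and "attributes" in subtree:
            return subtree["attributes"]
        if isinstance(subtree, dict):
            found = find_attributes(subtree, category)
            if found:
                return found
    return None

print(find_attributes(taxonomy, "Running Shoes"))  # → ['size', 'color', 'drop_mm']
```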

Unsupervised Learning
Training a model to identify patterns in a dataset and generate educated predictions on its own, which can be particularly useful with unlabeled datasets.
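A classic example is clustering: with no labels at all, the algorithm discovers the groups itself. A tiny one-dimensional k-means sketch (the points and starting centers are illustrative):

```python
def kmeans_1d(points, centers, iterations=10):
    """Tiny 1-D k-means: group unlabeled points around k moving centers."""
    for _ in range(iterations):
        clusters = [[] for _ in centers]
        for p in points:  # assign each point to its nearest center
            nearest = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        # move each center to the mean of its assigned points
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# No labels here -- the algorithm finds the two natural groups on its own:
print(kmeans_1d([1.0, 1.2, 0.8, 9.0, 9.5, 8.5], centers=[0.0, 5.0]))
```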