Glossary
Every term, person, and concept referenced across Fenris content. Search or filter by category.
Showing 66 of 66 entries
A
Agentic AI
AI systems that can take autonomous actions and execute multi-step workflows rather than just generating text responses. Agentic AI can use tools, browse the web, write and run code, and complete complex tasks with minimal human intervention.
AI Slop
A dismissive term for content generated with AI assistance. Critics use it to describe output that feels generic or low-effort. However, the label often misses the deeper shift: AI-assisted creation expands access to building, writing, and creating for people who previously lacked the technical skills or resources.
AI Winter
A period when AI research funding and interest collapse due to unmet expectations. There were two major AI winters: 1974-1980 (after the Lighthill report) and 1987-early 1990s (after expert systems failed commercially). During the second winter, researchers rebranded their work to avoid the toxic "AI" label.
Alan Turing
British mathematician who published "Computing Machinery and Intelligence" in 1950, proposing what became known as the Turing Test. Widely considered one of the founders of computer science and artificial intelligence.
AlexNet
A deep neural network that won the 2012 ImageNet competition by a landslide, beating the runner-up by over 10 percentage points. Built by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton, AlexNet proved that deep learning worked and is considered the inflection point for modern AI.
ALPAC Report
A 1966 U.S. government report that evaluated machine translation and concluded computers were nowhere close to matching human translators. One of the early blows to AI optimism during the first golden age.
AlphaGo
A program by Google DeepMind that defeated world Go champion Lee Sedol in 2016. Go was considered far too complex for brute-force AI, making this a landmark achievement.
Anthropic
An AI safety company that builds the Claude family of models. Founded by former OpenAI researchers, Anthropic focuses on building AI systems that are safe, helpful, and honest.
Arthur Samuel
IBM researcher who built one of the first programs that could learn from experience (a checkers player, 1952) and coined the term "machine learning."
Artificial Intelligence (AI)
The field of computer science focused on building systems that can perform tasks typically requiring human intelligence, such as understanding language, recognizing patterns, and making decisions. The term was coined in 1956 at the Dartmouth Workshop.
B
Backpropagation
A training technique that allows neural networks to learn by adjusting their internal weights based on errors. Revived in the 1980s by researchers like David Rumelhart, it became essential to the deep learning revolution.
C
ChatGPT
A conversational AI product by OpenAI launched in November 2022. It reached 100 million users in two months, the fastest consumer technology adoption in history. ChatGPT is widely credited with bringing AI into mainstream public consciousness.
Claude
A family of AI models built by Anthropic, launched in 2023. Known for strong reasoning, long context windows, and a focus on safety and helpfulness.
Claude Shannon
Known as the father of information theory. Co-organized the 1956 Dartmouth Workshop that gave AI its name.
Claw
A category of AI agent that can autonomously execute tasks on your computer rather than just generating text responses. The term comes from OpenClaw's lobster mascot, where the "claw" metaphor represents grabbing and completing tasks. Claws represent the shift from generative AI (AI that writes) to agentic AI (AI that acts).
Context Window
The amount of text (measured in tokens) that an AI model can process in a single conversation. Modern models in 2025-2026 support context windows of over a million tokens, roughly equivalent to several novels.
D
DALL-E
An AI image generation model by OpenAI that creates images from text descriptions. First demonstrated in 2021, it was one of the earliest high-profile generative AI tools.
Dartmouth Workshop
An eight-week workshop held at Dartmouth College in the summer of 1956, organized by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon. Universally considered the birth of AI as a field. The funding proposal contains the first known use of the term "artificial intelligence."
Deep Blue
An IBM chess computer that defeated world champion Garry Kasparov in 1997. A major public milestone for AI, though it was narrow AI built specifically for chess.
Deep Learning
A subset of machine learning that uses neural networks with many layers (hence "deep") to learn complex patterns in data. The deep learning revolution began in 2012 when AlexNet demonstrated its power on image recognition.
Diffusion Model
A type of generative AI that creates images by gradually removing noise from a random starting point. Stable Diffusion and Midjourney use this approach. Diffusion models largely replaced GANs for image generation starting around 2022.
E
EU AI Act
The world's first comprehensive AI regulatory framework, passed by the European Union in 2024. It classifies AI systems by risk level and establishes rules for how they can be developed and deployed.
Expert System
A type of AI from the 1980s that encoded human specialist knowledge as a set of if-then rules. Expert systems like XCON were commercially successful for a time but ultimately proved too brittle and expensive to maintain, contributing to the second AI winter.
F
Fei-Fei Li
Stanford researcher who led the creation of ImageNet (2007-2009), a dataset of over 14 million labeled images. ImageNet became the proving ground for deep learning when AlexNet dominated its 2012 competition.
Fifth Generation Computer Project
An ambitious Japanese national project launched in 1980 aimed at creating computers that could think. It spurred renewed global investment in AI during the 1980s expert systems boom.
G
Gemini
Google's family of large language models, successor to Bard. Gemini is multimodal, meaning it can process text, images, audio, and video.
Generative Adversarial Network (GAN)
A type of AI where two neural networks compete against each other: one generates content and the other evaluates it. Invented by Ian Goodfellow in 2014, GANs were an early approach to AI image generation before diffusion models.
Generative AI
AI systems that can create new content, including text, images, code, music, and video. The generative AI explosion began around 2022 with tools like ChatGPT, DALL-E, Stable Diffusion, and Midjourney.
Geoffrey Hinton
Often called the "Godfather of Deep Learning." Published key papers in 2006 on training deep neural networks and co-created AlexNet in 2012. His decades of work on neural networks, largely ignored during the AI winters, became the foundation of modern AI.
GitHub Copilot
An AI coding assistant launched in 2021 that suggests code as developers type. Built on OpenAI models, Copilot was one of the first AI tools to become part of daily professional workflows.
Google DeepMind
An AI research lab that created AlphaGo and contributes to Google's Gemini models. Originally founded as DeepMind in London, it merged with Google Brain in 2023.
GPT (Generative Pre-trained Transformer)
A series of large language models by OpenAI. GPT-1 (2018) introduced the approach, GPT-3 (2020) demonstrated that scale produces emergent capabilities, and GPT-4 (2023) became one of the most capable AI systems available.
GPU (Graphics Processing Unit)
A processor originally designed for rendering graphics in video games. GPUs turned out to be ideal for the parallel mathematical operations required to train neural networks. The availability of GPU computing was one of three factors that enabled the 2012 deep learning breakthrough.
H
Hito Steyerl
Artist and writer whose essay "In Defense of the Poor Image" (2009) argued that low-resolution, degraded images circulating online were dismissed as inferior but were actually democratic. Their "poorness" made them accessible and shareable, escaping gatekeepers. The same argument applies to AI-assisted creation.
I
Ian Goodfellow
AI researcher who invented Generative Adversarial Networks (GANs) in 2014, an early approach to AI-generated images.
Ilya Sutskever
AI researcher who co-created AlexNet with Krizhevsky and Hinton in 2012. Later became co-founder and chief scientist at OpenAI.
ImageNet
A dataset of over 14 million labeled images across 22,000 categories, assembled between 2007 and 2009 by Fei-Fei Li and her team at Stanford. ImageNet became the benchmark that proved deep learning worked when AlexNet dominated its competition in 2012.
In Defense of the Poor Image
A 2009 essay by Hito Steyerl arguing that low-quality images circulating online were dismissed as inferior but were actually democratic and accessible. The essay provides a framework for understanding why AI-created content should not be dismissed simply because it looks or sounds different from traditionally produced work.
IronClaw
A security-first alternative to OpenClaw built in Rust by NEAR AI. Runs in encrypted enclaves, stores credentials in a vault the AI never sees directly, and sandboxes every tool in WebAssembly. Designed for users and organizations who want agentic AI capabilities with stronger security guarantees than OpenClaw provides.
J
John McCarthy
Computer scientist who coined the term "artificial intelligence" and organized the 1956 Dartmouth Workshop. Founded the Stanford AI Laboratory.
L
Large Language Model (LLM)
An AI model trained on massive amounts of text data that can understand and generate human language. Examples include GPT-4, Claude, and Gemini. These models use the Transformer architecture and typically have billions of parameters.
Lighthill Report
A 1973 report by mathematician James Lighthill to the British Parliament concluding that AI research had failed to deliver on its promises. It led to massive funding cuts in the U.S. and UK, triggering the first AI winter.
Logic Theorist
Often called the first AI program, created by Allen Newell and Herbert Simon and demonstrated at the 1956 Dartmouth Workshop. It could prove mathematical theorems.
M
Machine Learning
A subset of AI where systems learn from data and improve their performance over time without being explicitly programmed for every scenario. The term was coined by Arthur Samuel in the 1950s while building a checkers program at IBM.
Marshall McLuhan
Media theorist who wrote "Understanding Media: The Extensions of Man" (1964) and coined the phrase "the medium is the message." His idea that the real impact of technology is not its content but how it changes the scale, speed, and structure of human interaction is central to understanding why AI matters beyond the quality of its output.
Marvin Minsky
Co-founder of the MIT AI Laboratory and one of the most influential figures in AI history. His 1969 book "Perceptrons" (with Seymour Papert) inadvertently killed neural network research for over a decade by highlighting its limitations.
Model Context Protocol (MCP)
An open protocol emerging in 2025-2026 that allows AI systems to connect with external tools and data sources in a standardized way. MCP enables agentic AI by giving models access to APIs, databases, and applications.
N
Narrow AI
AI designed for a specific task or domain. Deep Blue (chess) and AlphaGo (Go) are examples. Unlike general AI, narrow AI cannot transfer its abilities to different problems.
Natural Language Processing (NLP)
The branch of AI focused on enabling computers to understand, interpret, and generate human language. Modern NLP is powered by Transformer-based models like GPT and Claude.
NemoClaw
NVIDIA's open-source security layer for OpenClaw that adds enterprise-grade sandboxing, privacy controls, and policy guardrails. NemoClaw lets organizations deploy AI agents while maintaining control over how they handle data and behave. NVIDIA CEO Jensen Huang called it essential infrastructure, saying "every company now needs to have an OpenClaw strategy."
Neural Network
A computing system inspired by the biological neural networks in the brain. First modeled mathematically by McCulloch and Pitts in 1943, neural networks are the foundation of modern deep learning systems.
O
OpenAI
An AI research company that created GPT-3, GPT-4, ChatGPT, DALL-E, and Codex. Founded in 2015, OpenAI is one of the leading developers of frontier AI systems.
OpenClaw
A free, open-source AI agent created by Peter Steinberger that runs locally on your machine and connects to LLMs via messaging apps like WhatsApp, Telegram, and Discord. Unlike chatbots, OpenClaw can execute tasks: manage files, send emails, run code, and control applications. It went viral in January 2026, passing 250,000 GitHub stars and overtaking React. Steinberger joined OpenAI in February 2026.
P
Parameters
The internal values a model learns during training. More parameters generally means more capacity to learn patterns. GPT-3 has 175 billion parameters. Parameter count is often used as a rough measure of model size.
Perceptron
An early type of neural network. Minsky and Papert's 1969 book "Perceptrons" demonstrated mathematical limitations of single-layer networks, which effectively killed neural network research funding for over a decade.
Peter Steinberger
Creator of OpenClaw (originally Clawdbot/Moltbot), the open-source AI agent that went viral in January 2026. Previously spent 13 years building PSPDFKit. Joined OpenAI in February 2026 to build personal AI agents for mainstream users, calling it "the fastest way to bring this to everyone."
Prompt Injection
An attack where hidden instructions are embedded in content (emails, web pages, messages) to trick an AI agent into performing unauthorized actions. In the context of claws, prompt injection can cause your AI assistant to leak private data, execute malicious commands, or bypass safety controls without your knowledge.
S
Stable Diffusion
An open-source AI image generation model released in 2022 that uses diffusion techniques to create images from text descriptions. Its open release helped democratize AI art creation.
T
The Work of Art in the Age of Mechanical Reproduction
A 1935 essay by Walter Benjamin arguing that technologies enabling mass reproduction of art dissolve the traditional "aura" of originality and bring art closer to mass participation and political potential.
Transformer
A neural network architecture introduced in 2017 by Google researchers in the paper "Attention Is All You Need." Unlike previous approaches that processed text sequentially, Transformers can process all words in a passage simultaneously. This architecture powers virtually every major AI system today.
Turing Test
A test proposed by Alan Turing in 1950: if a machine can hold a conversation and a human cannot tell whether they are talking to a person or a computer, the machine can be considered intelligent.
U
Understanding Media: The Extensions of Man
A 1964 book by Marshall McLuhan introducing the idea that "the medium is the message." The core argument is that the real impact of any technology is not the content it produces but how it changes the scale, speed, and structure of human interaction.
V
Vibe Coding
A term for building software using AI tools by describing what you want in natural language rather than writing every line of code manually. Often dismissed as producing low-quality output, but it represents a shift in who can participate in software creation.
W
Walter Benjamin
Philosopher who wrote "The Work of Art in the Age of Mechanical Reproduction" (1935), arguing that new technologies that make creation and reproduction more accessible fundamentally shift how we assign value to art. The "aura" of originality breaks down, and art moves toward mass participation. His argument is directly relevant to the AI creation debate.
Walter Pitts
Logician who, with Warren McCulloch, proposed the first mathematical model of how neural networks could work in 1943.
Warren McCulloch
Neuroscientist who, with Walter Pitts, published the first mathematical model of neural networks in 1943. Their work is the earliest ancestor of the neural networks used in AI today.