Claude
What is Claude? Anthropic's family of large language models built for safety and helpfulness.
Claude is Anthropic's family of large language models designed to be helpful, harmless, and honest. Available through the Anthropic API and consumer products (claude.ai and the Claude mobile apps), Claude powers conversations, analysis, code generation, and agentic workflows for millions of users and enterprises. The current generation includes Claude Opus, Sonnet, and Haiku models at different capability and cost tiers.
Why Claude Matters
Claude is one of the leading frontier model families competing directly with OpenAI's GPT series and Google's Gemini. Anthropic's focus on Constitutional AI and safety research differentiates Claude from competitors — the models are trained to refuse harmful requests while remaining maximally useful for legitimate tasks.
For developers, Claude's extended context windows (up to 200K tokens), strong instruction-following, and tool use capabilities make it a foundation for building AI-powered applications. Anthropic has expanded Claude's reach beyond chat into agentic desktop workflows and persistent memory features that maintain context across conversations.
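Developer access centers on a Messages-style API: the application sends a model name, a token budget, and a list of conversation turns. The sketch below builds such a request body as plain JSON (shape only, no live call); the model ID is an illustrative placeholder, so check Anthropic's current model list before using one.

```python
import json

# Sketch of a Messages API request body (shape only, not a live call).
# The model ID below is a placeholder; consult the current model list.
request = {
    "model": "claude-sonnet-example",  # placeholder model ID
    "max_tokens": 1024,                # cap on generated output tokens
    "messages": [
        # The long context window means entire documents can go in a turn.
        {"role": "user", "content": "Summarize this report in five bullets."}
    ],
}

payload = json.dumps(request)  # body that would be POSTed to the API
```

Note that `max_tokens` bounds only the generated output; the ~200K-token context budget covers the input side.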
How Claude Works
Claude is built on transformer architecture and trained using Reinforcement Learning from Human Feedback (RLHF) combined with Anthropic's Constitutional AI approach, where the model is guided by a set of written principles rather than relying solely on human labelers.
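The Constitutional AI idea can be sketched as a critique-and-revision loop: the model drafts a response, critiques its own draft against a written principle, then revises. The sketch below is a simplified illustration, not Anthropic's actual pipeline; `model` is any text-in/text-out callable, and the prompts and principle wording are stand-ins.

```python
# Illustrative sketch of Constitutional AI's critique-and-revision phase.
# `model` is any text-in/text-out callable; prompts are simplified stand-ins.
def constitutional_revision(model, prompt, principles):
    response = model(prompt)
    for principle in principles:
        # Ask the model to critique its own draft against a principle...
        critique = model(
            f"Critique this response against the principle '{principle}':\n{response}"
        )
        # ...then revise the draft in light of that critique.
        response = model(
            f"Rewrite the response to address this critique.\n"
            f"Critique: {critique}\nResponse: {response}"
        )
    return response
```

In the published method, revised responses like these supervise fine-tuning, and a preference model trained on principle-guided comparisons drives a reinforcement-learning stage.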
Key characteristics:
- Model tiers: Opus (highest capability), Sonnet (balanced), and Haiku (fastest and cheapest) serve different use cases
- Extended context: Supports up to 200K input tokens, enabling analysis of large documents and codebases
- Tool use: Native function-calling support lets Claude interact with external APIs, databases, and services
- Vision: Processes images alongside text for multimodal reasoning
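Tool use works by declaring each tool to the model as a name, description, and JSON Schema for its inputs; the model then emits a structured tool call that the application executes. The sketch below shows the shape of one such definition, assuming the documented `name`/`description`/`input_schema` format; the weather tool itself is a made-up example.

```python
# Sketch of one entry in the `tools` list sent with a request.
# Assumes the documented name/description/input_schema shape;
# the weather tool is a made-up example.
get_weather_tool = {
    "name": "get_weather",
    "description": "Return the current weather for a given city.",
    "input_schema": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name, e.g. 'Paris'"},
        },
        "required": ["city"],
    },
}
```

At runtime the model replies with a tool-use block naming the tool and its arguments; the application runs the call and feeds the result back as the next turn, so Claude never executes anything itself.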
Related Terms
- Anthropic: The AI safety company behind Claude, founded by former OpenAI researchers
- Constitutional AI: Anthropic's training methodology that aligns Claude's behavior using written principles
- RLHF: Reinforcement Learning from Human Feedback, a core technique used in Claude's training pipeline