PraisonAI: Hire a 24/7 AI Workforce. Stop writing boilerplate and start shipping autonomous agents that research, plan, and execute tasks across your apps. From one agent to an entire organization, deployed in 5 lines of code.
```bash
pip install praisonai
export TAVILY_API_KEY=xxxxx
```

- Install the lightweight core SDK:

```bash
pip install praisonaiagents
export OPENAI_API_KEY="your-api-key"
```

- Run your first autonomous agent:
```python
from praisonaiagents import Agent

# Give your agent a goal, and watch it work.
agent = Agent(instructions="You are a senior data analyst.")
agent.start("Analyze the top 3 tech trends of 2026 and format as a markdown table.")
```

Start simple with the core SDK, or expand to full visual builders and dashboards when you're ready.
- Core SDK (`praisonaiagents`): for pure Python development. `pip install praisonaiagents`
- PraisonAI CLI (`praisonai`): for terminal-based developers. `pip install praisonai`
- Claw Dashboard: connect agents directly to Telegram, Slack, or Discord. `pip install "praisonai[claw]"`
- Flow Visual Builder: drag-and-drop workflow creation. `pip install "praisonai[flow]"`
- PraisonAI UI: clean chat interface. `pip install "praisonai[ui]"`
```bash
npm install praisonai
```

```python
from praisonaiagents import Agent

agent = Agent(instructions="You are a helpful AI assistant")
agent.start("Write a movie script about a robot on Mars")
```

```python
from praisonaiagents import Agent, Agents

research_agent = Agent(instructions="Research about AI")
summarise_agent = Agent(instructions="Summarise research agent's findings")
agents = Agents(agents=[research_agent, summarise_agent])
agents.start()
```

```python
from praisonaiagents import Agent, MCP

# stdio - local NPX/Python servers
agent = Agent(tools=MCP("npx @modelcontextprotocol/server-memory"))

# Streamable HTTP - production servers
agent = Agent(tools=MCP("https://api.example.com/mcp"))

# WebSocket - real-time bidirectional
agent = Agent(tools=MCP("wss://api.example.com/mcp", auth_token="token"))

# With environment variables
agent = Agent(
    tools=MCP(
        command="npx",
        args=["-y", "@modelcontextprotocol/server-brave-search"],
        env={"BRAVE_API_KEY": "your-key"}
    )
)
```

Full MCP docs: stdio, HTTP, WebSocket, SSE transports
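Whatever the transport, MCP messages are JSON-RPC 2.0 under the hood. A minimal sketch of that request framing, for intuition only (the `mcp_request` helper is hypothetical, not part of PraisonAI or the MCP SDK):

```python
import json

def mcp_request(method, params=None, msg_id=1):
    """Build a JSON-RPC 2.0 request body, the wire format MCP uses on every transport."""
    body = {"jsonrpc": "2.0", "id": msg_id, "method": method}
    if params is not None:
        body["params"] = params
    return json.dumps(body)

# A typical first call: ask the server which tools it exposes.
print(mcp_request("tools/list"))
```

The transport (stdio pipe, HTTP stream, or WebSocket) only changes how these bytes travel, not their shape.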
```python
from praisonaiagents import Agent, tool

@tool
def search(query: str) -> str:
    """Search the web for information."""
    return f"Results for: {query}"

@tool
def calculate(expression: str) -> float:
    """Evaluate a math expression."""
    # eval() is used here for brevity only; never pass it untrusted input.
    return eval(expression)

agent = Agent(
    instructions="You are a helpful assistant",
    tools=[search, calculate]
)
agent.start("Search for AI news and calculate 15*4")
```

Full tools docs: BaseTool, tool packages, 100+ built-in tools
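Since agents may pass arbitrary strings to tools, a production `calculate` tool should avoid `eval`. A minimal sketch of a restricted evaluator using the stdlib `ast` module (the `safe_calculate` function is illustrative, not a PraisonAI API):

```python
import ast
import operator

# Whitelist of arithmetic operators; anything else is rejected.
_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def safe_calculate(expression: str) -> float:
    """Evaluate a math expression without eval()."""
    def _eval(node):
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.operand))
        raise ValueError("unsupported expression")
    return _eval(ast.parse(expression, mode="eval"))

print(safe_calculate("15*4"))  # 60
```

The function walks the parsed expression tree and raises on anything outside plain arithmetic, so inputs like `__import__('os')` fail instead of executing.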
```python
from praisonaiagents import Agent, db

agent = Agent(
    name="Assistant",
    db=db(database_url="postgresql://localhost/mydb"),
    session_id="my-session"
)
agent.chat("Hello!")  # Auto-persists messages, runs, traces
```

Full persistence docs: PostgreSQL, MySQL, SQLite, MongoDB, Redis, and 20+ more
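The idea behind `db()` is session-scoped message storage. A stdlib `sqlite3` sketch of that concept (the `MessageStore` class and its schema are hypothetical; the real `db()` helper manages its own schema across many backends):

```python
import sqlite3

class MessageStore:
    """Minimal per-session message persistence, sketching what db() automates."""
    def __init__(self, database_url=":memory:"):
        self.conn = sqlite3.connect(database_url)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS messages "
            "(session_id TEXT, role TEXT, content TEXT)"
        )

    def append(self, session_id, role, content):
        # Persist each turn as it happens, keyed by session.
        self.conn.execute(
            "INSERT INTO messages VALUES (?, ?, ?)", (session_id, role, content)
        )
        self.conn.commit()

    def history(self, session_id):
        # Replay a session's conversation in insertion order.
        rows = self.conn.execute(
            "SELECT role, content FROM messages WHERE session_id = ?", (session_id,)
        )
        return list(rows)

store = MessageStore()
store.append("my-session", "user", "Hello!")
print(store.history("my-session"))  # [('user', 'Hello!')]
```

Resuming a session is then just loading its history back into the agent's context before the next turn.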
Connect your AI agents to Telegram, Discord, Slack, WhatsApp and more, all from a single command.

```bash
pip install "praisonai[claw]"
praisonai claw
```

Open http://localhost:8082. The dashboard comes with 13 built-in pages: Chat, Agents, Memory, Knowledge, Channels, Guardrails, Cron, and more. Add messaging channels directly from the UI.

Full Claw docs: platform tokens, CLI options, Docker, and YAML agent mode
Build multi-agent workflows visually with drag-and-drop components in Langflow.

```bash
pip install "praisonai[flow]"
praisonai flow
```

Open http://localhost:7861. Use the Agent and Agent Team components to create sequential or parallel workflows. Connect Chat Input → Agent Team → Chat Output for instant multi-agent pipelines.

Full Flow docs: visual agent building, component reference, and deployment
Lightweight chat interface for your AI agents.

```bash
pip install "praisonai[ui]"
praisonai ui
```

Create `agents.yaml`:

```yaml
framework: praisonai
topic: "Write a blog post about AI"
agents:
  researcher:
    role: Research Analyst
    goal: Research AI trends and gather information
    instructions: "Find accurate information about AI trends"
  writer:
    role: Content Writer
    goal: Write engaging blog posts
    instructions: "Write clear, engaging content based on research"
```

Run with:

```bash
praisonai agents.yaml
```

The agents automatically work together sequentially.
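Sequential execution means each agent receives the previous agent's output as its input. Stripped of the framework, the pattern is just a pipeline over the agent list (the `research`, `write`, and `run_sequentially` functions below are stand-ins, not PraisonAI APIs):

```python
def research(topic):
    # Stand-in for the researcher agent.
    return f"findings about {topic}"

def write(findings):
    # Stand-in for the writer agent.
    return f"blog post based on {findings}"

def run_sequentially(steps, initial_input):
    """Pipe each step's output into the next, like a sequential agent team."""
    result = initial_input
    for step in steps:
        result = step(result)
    return result

print(run_sequentially([research, write], "AI trends"))
```

In the YAML above, the researcher and writer play the roles of `research` and `write`, with the framework handling the hand-off.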
Create two files in the same folder.

`agents.yaml`:

```yaml
framework: praisonai
topic: "Calculate the sum of 25 and 15"
agents:
  calculator_agent:
    role: Calculator
    goal: Perform calculations
    instructions: "Use the add_numbers tool to help with calculations"
    tools:
      - add_numbers
```

`tools.py`:

```python
def add_numbers(a: float, b: float) -> float:
    """
    Add two numbers together.

    Args:
        a: First number
        b: Second number

    Returns:
        The sum of a and b
    """
    return a + b
```

Run with:

```bash
praisonai agents.yaml
```

Tips:

- Use the function name (e.g., `add_numbers`) in the tools list, not the file name
- Tools in `tools.py` are automatically discovered
- The function's docstring helps the AI understand how to use it
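Discovery like this works because a function's signature and docstring already carry everything a model needs to call it. A sketch of deriving a tool description with stdlib `inspect` (the `tool_schema` helper is illustrative, not PraisonAI's actual discovery code):

```python
import inspect

def add_numbers(a: float, b: float) -> float:
    """Add two numbers together."""
    return a + b

def tool_schema(fn):
    """Derive a simple tool description from a function's signature and docstring."""
    sig = inspect.signature(fn)
    return {
        "name": fn.__name__,                  # what the model calls
        "description": inspect.getdoc(fn),    # how the model knows when to call it
        "parameters": {                       # argument names mapped to type names
            name: param.annotation.__name__
            for name, param in sig.parameters.items()
        },
    }

print(tool_schema(add_numbers))
```

This is why the tips above matter: the list references the function name, and a clear docstring directly improves the model's tool use.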
| Category | Commands |
|---|---|
| Execution | praisonai, --auto, --interactive, --chat |
| Research | research, --query-rewrite, --deep-research |
| Planning | --planning, --planning-tools, --planning-reasoning |
| Workflows | workflow run, workflow list, workflow auto |
| Memory | memory show, memory add, memory search, memory clear |
| Knowledge | knowledge add, knowledge query, knowledge list |
| Sessions | session list, session resume, session delete |
| Tools | tools list, tools info, tools search |
| MCP | mcp list, mcp create, mcp enable |
| Development | commit, docs, checkpoint, hooks |
| Scheduling | schedule start, schedule list, schedule stop |
Full CLI reference
### Core Agents
| Feature | Code | Docs |
|---|---|---|
| Single Agent | Example | π |
| Multi Agents | Example | π |
| Auto Agents | Example | π |
| Self Reflection AI Agents | Example | π |
| Reasoning AI Agents | Example | π |
| Multi Modal AI Agents | Example | π |
### Workflows
| Feature | Code | Docs |
|---|---|---|
| Simple Workflow | Example | π |
| Workflow with Agents | Example | π |
| Agentic Routing (route()) | Example | π |
| Parallel Execution (parallel()) | Example | π |
| Loop over List/CSV (loop()) | Example | π |
| Evaluator-Optimizer (repeat()) | Example | π |
| Conditional Steps | Example | π |
| Workflow Branching | Example | π |
| Workflow Early Stop | Example | π |
| Workflow Checkpoints | Example | π |
### Code & Development
| Feature | Code | Docs |
|---|---|---|
| Code Interpreter Agents | Example | π |
| AI Code Editing Tools | Example | π |
| External Agents (All) | Example | π |
| Claude Code CLI | Example | π |
| Gemini CLI | Example | π |
| Codex CLI | Example | π |
| Cursor CLI | Example | π |
### Memory & Knowledge
| Feature | Code | Docs |
|---|---|---|
| Memory (Short & Long Term) | Example | π |
| File-Based Memory | Example | π |
| Claude Memory Tool | Example | π |
| Add Custom Knowledge | Example | π |
| RAG Agents | Example | π |
| Chat with PDF Agents | Example | π |
| Data Readers (PDF, DOCX, etc.) | CLI | π |
| Vector Store Selection | CLI | π |
| Retrieval Strategies | CLI | π |
| Rerankers | CLI | π |
| Index Types (Vector/Keyword/Hybrid) | CLI | π |
| Query Engines (Sub-Question, etc.) | CLI | π |
### Research & Intelligence
| Feature | Code | Docs |
|---|---|---|
| Deep Research Agents | Example | π |
| Query Rewriter Agent | Example | π |
| Native Web Search | Example | π |
| Built-in Search Tools | Example | π |
| Unified Web Search | Example | π |
| Web Fetch (Anthropic) | Example | π |
### Planning & Execution
| Feature | Code | Docs |
|---|---|---|
| Planning Mode | Example | π |
| Planning Tools | Example | π |
| Planning Reasoning | Example | π |
| Prompt Chaining | Example | π |
| Evaluator Optimiser | Example | π |
| Orchestrator Workers | Example | π |
### Specialized Agents
| Feature | Code | Docs |
|---|---|---|
| Data Analyst Agent | Example | π |
| Finance Agent | Example | π |
| Shopping Agent | Example | π |
| Recommendation Agent | Example | π |
| Wikipedia Agent | Example | π |
| Programming Agent | Example | π |
| Math Agents | Example | π |
| Markdown Agent | Example | π |
| Prompt Expander Agent | Example | π |
### Media & Multimodal
| Feature | Code | Docs |
|---|---|---|
| Image Generation Agent | Example | π |
| Image to Text Agent | Example | π |
| Video Agent | Example | π |
| Camera Integration | Example | π |
### Protocols & Integration
| Feature | Code | Docs |
|---|---|---|
| MCP Transports | Example | π |
| WebSocket MCP | Example | π |
| MCP Security | Example | π |
| MCP Resumability | Example | π |
| MCP Config Management | Docs | π |
| LangChain Integrated Agents | Example | π |
### Safety & Control
| Feature | Code | Docs |
|---|---|---|
| Guardrails | Example | π |
| Human Approval | Example | π |
| Rules & Instructions | Docs | π |
### Advanced Features
| Feature | Code | Docs |
|---|---|---|
| Async & Parallel Processing | Example | π |
| Parallelisation | Example | π |
| Repetitive Agents | Example | π |
| Agent Handoffs | Example | π |
| Stateful Agents | Example | π |
| Autonomous Workflow | Example | π |
| Structured Output Agents | Example | π |
| Model Router | Example | π |
| Prompt Caching | Example | π |
| Fast Context | Example | π |
### Tools & Configuration
| Feature | Code | Docs |
|---|---|---|
| 100+ Custom Tools | Example | π |
| YAML Configuration | Example | π |
| 100+ LLM Support | Example | π |
| Callback Agents | Example | π |
| Hooks | Example | π |
| Middleware System | Example | π |
| Configurable Model | Example | π |
| Rate Limiter | Example | π |
| Injected Tool State | Example | π |
| Shadow Git Checkpoints | Example | π |
| Background Tasks | Example | π |
| Policy Engine | Example | π |
| Thinking Budgets | Example | π |
| Output Styles | Example | π |
| Context Compaction | Example | π |
### Monitoring & Management
| Feature | Code | Docs |
|---|---|---|
| Sessions Management | Example | π |
| Auto-Save Sessions | Docs | π |
| History in Context | Docs | π |
| Telemetry | Example | π |
| Project Docs (.praison/docs/) | Docs | π |
| AI Commit Messages | Docs | π |
| @Mentions in Prompts | Docs | π |
### CLI Features
| Feature | Code | Docs |
|---|---|---|
| Slash Commands | Example | π |
| Autonomy Modes | Example | π |
| Cost Tracking | Example | π |
| Repository Map | Example | π |
| Interactive TUI | Example | π |
| Git Integration | Example | π |
| Sandbox Execution | Example | π |
| CLI Compare | Example | π |
| Profile/Benchmark | Docs | π |
| Auto Mode | Docs | π |
| Init | Docs | π |
| File Input | Docs | π |
| Final Agent | Docs | π |
| Max Tokens | Docs | π |
### Evaluation
| Feature | Code | Docs |
|---|---|---|
| Accuracy Evaluation | Example | π |
| Performance Evaluation | Example | π |
| Reliability Evaluation | Example | π |
| Criteria Evaluation | Example | π |
```bash
npm install praisonai
export OPENAI_API_KEY=xxxxxxxxxxxxxxxxxxxxxx
```

```javascript
const { Agent } = require('praisonai');

const agent = new Agent({ instructions: 'You are a helpful AI assistant' });
agent.start('Write a movie script about a robot on Mars');
```

PraisonAI is built for speed, with agent instantiation in under 4 μs. This low overhead improves responsiveness and helps multi-agent systems scale efficiently in real-world production workloads.
| Performance Metric | PraisonAI |
|---|---|
| Avg Instantiation Time | 3.77 μs |
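Figures like this come from micro-benchmarks, and absolute numbers vary by machine and Python version. You can time instantiation of any class the same way with the stdlib `timeit` module (the `StubAgent` class below is a stand-in, not the real Agent):

```python
import timeit

class StubAgent:
    # Stand-in for Agent; real timings depend on the actual class and hardware.
    def __init__(self, instructions=""):
        self.instructions = instructions

n = 100_000
total = timeit.timeit(lambda: StubAgent(instructions="hi"), number=n)
per_call_us = total / n * 1e6  # seconds -> microseconds per instantiation
print(f"avg instantiation: {per_call_us:.2f} us")
```

Averaging over many iterations is what makes a microsecond-scale measurement meaningful; a single call is dominated by timer noise.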
AI agents solving real-world problems across industries:
| Use Case | Description |
|---|---|
| Research & Analysis | Conduct deep research, gather information, and generate insights from multiple sources automatically |
| Code Generation | Write, debug, and refactor code with AI agents that understand your codebase and requirements |
| Content Creation | Generate blog posts, documentation, marketing copy, and technical writing with multi-agent teams |
| Data Pipelines | Extract, transform, and analyze data from APIs, databases, and web sources automatically |
| Customer Support | Deploy 24/7 support bots on Telegram, Discord, Slack with memory and knowledge-backed responses |
| Workflow Automation | Automate multi-step business processes with agents that hand off tasks, verify results, and self-correct |
Powered by 100+ LLMs (OpenAI, Anthropic, Gemini & local models).
View all 24 providers with examples
| Provider | Example |
|---|---|
| OpenAI | Example |
| Anthropic | Example |
| Google Gemini | Example |
| Ollama | Example |
| Groq | Example |
| DeepSeek | Example |
| xAI Grok | Example |
| Mistral | Example |
| Cohere | Example |
| Perplexity | Example |
| Fireworks | Example |
| Together AI | Example |
| OpenRouter | Example |
| HuggingFace | Example |
| Azure OpenAI | Example |
| AWS Bedrock | Example |
| Google Vertex | Example |
| Databricks | Example |
| Cloudflare | Example |
| AI21 | Example |
| Replicate | Example |
| SageMaker | Example |
| Moonshot | Example |
| vLLM | Example |
"Grok 3 customer support" – Elon Musk, quoting PraisonAI's tutorial
| Feature | How |
|---|---|
| MCP Protocol (stdio, HTTP, WebSocket, SSE) | tools=MCP("npx ...") |
| Planning Mode (plan → execute → reason) | planning=True |
| Deep Research (multi-step autonomous research) | Docs |
| External Agents (orchestrate Claude Code, Gemini CLI, Codex) | Docs |
| Agent Handoffs (seamless conversation passing) | handoff=True |
| Guardrails (input/output validation) | Docs |
| Web Search + Fetch (native browsing) | web_search=True |
| Self Reflection (agent reviews its own output) | Docs |
| Workflow Patterns (route, parallel, loop, repeat) | Docs |
| Memory, zero deps (works out of the box) | memory=True |
View all 25 features
| Feature | How |
|---|---|
| Prompt Caching (reduce latency and cost) | prompt_caching=True |
| Sessions + Auto-Save (persistent state across restarts) | auto_save="my-project" |
| Thinking Budgets (control reasoning depth) | thinking_budget=1024 |
| RAG + Quality-Based RAG (retrieval with automatic quality scoring) | Docs |
| Model Router (auto-routes to the cheapest capable model) | Docs |
| Shadow Git Checkpoints (auto-rollback on failure) | Docs |
| A2A Protocol (agent-to-agent interop) | Docs |
| Context Compaction (stay under token limits) | Docs |
| Telemetry (OpenTelemetry traces, spans, metrics) | Docs |
| Policy Engine (declarative agent behavior control) | Docs |
| Background Tasks (fire-and-forget agents) | Docs |
| Doom Loop Detection (auto-recovery from stuck agents) | Docs |
| Graph Memory (Neo4j-style relationship tracking) | Docs |
| Sandbox Execution (isolated code execution) | Docs |
| Bot Gateway (multi-agent routing across channels) | Docs |
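Of the features above, context compaction is the easiest to picture: keep a conversation under a token budget by dropping or summarizing the oldest messages. A minimal sketch of the truncation half of that idea (the `compact` function is hypothetical, with word counts standing in for real token counts):

```python
def compact(messages, budget):
    """Keep the most recent messages whose combined 'token' count fits the budget."""
    kept = []
    used = 0
    # Walk newest-to-oldest so recent context survives.
    for msg in reversed(messages):
        cost = len(msg.split())  # crude token proxy
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))

history = ["one two three", "four five", "six seven eight nine"]
print(compact(history, budget=6))  # ['four five', 'six seven eight nine']
```

A production implementation would use a real tokenizer and typically summarize the dropped prefix rather than discard it outright.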
Learn PraisonAI through our comprehensive video series:
View all 22 video tutorials
We welcome contributions! Fork the repo, create a branch, and submit a PR. See the Contributing Guide.
ModuleNotFoundError: No module named 'praisonaiagents'

Install the package:

```bash
pip install praisonaiagents
```

API key not found / Authentication error

Ensure your API key is set:

```bash
export OPENAI_API_KEY=your_key_here
```

For other providers, see the Models docs.
How do I use a local model (Ollama)?

```bash
# Start the Ollama server first
ollama serve

# Point the client at Ollama's OpenAI-compatible endpoint
export OPENAI_BASE_URL=http://localhost:11434/v1
```

See the Models docs for more details.
How do I persist conversations to a database?

Use the db parameter:

```python
from praisonaiagents import Agent, db

agent = Agent(
    name="Assistant",
    db=db(database_url="postgresql://localhost/mydb"),
    session_id="my-session"
)
```

See the Persistence docs for supported databases.
How do I enable agent memory?

```python
from praisonaiagents import Agent

agent = Agent(
    name="Assistant",
    memory=True,  # Enables file-based memory (no extra deps!)
    user_id="user123"
)
```

See the Memory docs for more options.
How do I run multiple agents together?

```python
from praisonaiagents import Agent, Agents

agent1 = Agent(instructions="Research topics")
agent2 = Agent(instructions="Summarize findings")
agents = Agents(agents=[agent1, agent2])
agents.start()
```

See the Agents docs for more examples.
How do I use MCP tools?

```python
from praisonaiagents import Agent, MCP

agent = Agent(
    tools=MCP("npx @modelcontextprotocol/server-memory")
)
```

See the MCP docs for all transport options.
- Full Documentation
- Report Issues
- Discussions
Made with ❤️ by the PraisonAI Team

Documentation • GitHub