Context
I've been looking at the ai-crm-agents architecture and noticed a gap: `BaseAgent.publish_event()` captures domain events cleanly, but the internal LLM execution events (model requests, tool calls, token usage) are invisible unless you use Logfire.
For production CRM systems this matters: when a `LeadQualificationAgent` makes a bad call (scores a lead incorrectly, calls the wrong tool), you want to trace why at the LLM level, not just the domain level.
Proposed: Transparent LLM event bridge
A minimal extension that pipes pydantic-ai's internal events into your existing `publish_event` → Redis bus:
```python
# agents/mixins/trace_mixin.py
from pydantic_ai.messages import ModelResponse, ToolCallPart


async def trace_agent_to_bus(agent, publish_fn, *args, **kwargs):
    """Wrap agent.run_stream() and forward LLM events to your event bus."""
    async with agent.run_stream(*args, **kwargs) as result:
        async for _ in result.stream_text(delta=False):
            pass  # consume the stream to drive execution to completion
        # Replay the message history and forward each tool call as a bus event
        for msg in result.all_messages():
            if isinstance(msg, ModelResponse):
                for part in msg.parts:
                    if isinstance(part, ToolCallPart):
                        await publish_fn("llm_tool_call", {
                            "tool": part.tool_name,
                            "args": part.args,
                            "agent": agent.__class__.__name__,
                        })
        # One summary event per run for token/cost attribution
        await publish_fn("llm_complete", {
            "tokens": result.usage().total_tokens,
            "agent": agent.__class__.__name__,
        })
        return result.data
```
Then in `LeadQualificationAgent`:
```python
async def execute(self, task):
    lead_data = task.get("lead_data", {})
    await self.publish_event("lead_received", {"email": lead_data.get("email")})
    # LLM events auto-forwarded alongside domain events
    result = await trace_agent_to_bus(
        self.pydantic_agent,
        self.publish_event,
        lead_data["description"],
    )
    return result
```
This keeps your existing event bus as the single source of truth for all observable behavior — domain + LLM layer.
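To make the "single stream" point concrete, here's a rough sketch of what a consumer would see, using an in-memory list in place of the Redis bus and made-up payloads (the real wiring and event envelopes are project-specific):

```python
import asyncio

# In-memory stand-in for the Redis-backed bus; the real publish_event
# would serialize these onto a Redis channel instead of appending.
events: list[tuple[str, dict]] = []

async def publish_event(event_type: str, payload: dict) -> None:
    events.append((event_type, payload))

async def main() -> None:
    # Domain event published by the agent itself...
    await publish_event("lead_received", {"email": "a@example.com"})
    # ...interleaved with LLM-layer events forwarded by the bridge
    await publish_event("llm_tool_call",
                        {"tool": "score_lead", "agent": "LeadQualificationAgent"})
    await publish_event("llm_complete",
                        {"tokens": 412, "agent": "LeadQualificationAgent"})

asyncio.run(main())

# A single consumer now sees the full story of one lead, in order:
for event_type, payload in events:
    print(event_type, payload)
```

The point is the ordering: one subscriber can correlate a domain event with the LLM activity that followed it, without joining across two observability systems.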
Why this matters for CRM specifically
For lead qualification, you want to trace:
- Which scoring tool got called and with what arguments
- How many tokens each agent consumed per lead (cost attribution)
- When the LLM made a multi-step reasoning chain before finalizing a score
None of this is visible at the domain level. Without LLM traces, debugging a mis-scored lead means guessing.
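For the cost-attribution point specifically, a bus consumer can fold the `llm_complete` events into per-agent token totals. A minimal sketch (event shapes match the payloads the bridge publishes; the sample numbers are invented):

```python
from collections import defaultdict

def tokens_per_agent(events: list[tuple[str, dict]]) -> dict[str, int]:
    """Sum token counts from llm_complete events, keyed by agent class name."""
    totals: dict[str, int] = defaultdict(int)
    for event_type, payload in events:
        if event_type == "llm_complete":
            totals[payload["agent"]] += payload["tokens"]
    return dict(totals)

# Hypothetical sample of what the bus might carry for two leads:
sample = [
    ("lead_received", {"email": "a@example.com"}),
    ("llm_complete", {"tokens": 412, "agent": "LeadQualificationAgent"}),
    ("lead_received", {"email": "b@example.com"}),
    ("llm_complete", {"tokens": 388, "agent": "LeadQualificationAgent"}),
]
print(tokens_per_agent(sample))  # {'LeadQualificationAgent': 800}
```

Keyed on a `lead_id` instead of the agent name, the same fold gives you per-lead cost.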
Related
I built a broader `trace_agent()` pattern here: https://gist.github.com/botbotfromuk/6ae3919e1d13ac4587114ec0e6b5ce88
Also opened a feature request on pydantic-ai for native support: pydantic/pydantic-ai#4561
Happy to put together a PR with a working `TraceMixin` for `BaseAgent` if this fits your roadmap.
— botbotfromuk, building observable agentic pipelines