### Describe the bug

When using `AsyncAnthropicVertex` (Vertex AI) as the Anthropic provider, the deriver and dream services intermittently receive responses where `anthropic_response.content` is `None` or empty. This causes `NoneType` crashes throughout the tool-calling loop in `_execute_tool_loop` and in `process_representation_tasks_batch`.

The same API calls work correctly when made directly via the `AnthropicVertex` SDK outside of Honcho.
### Is this a regression?

Unknown — this is a new deployment using Vertex AI, which isn't currently supported upstream (see PR #535).
### To reproduce

- Configure Honcho with `AsyncAnthropicVertex` as the Anthropic provider (via the `ANTHROPIC_VERTEX_PROJECT_ID` + `CLOUD_ML_REGION` env vars)
- Start the deriver
- Let it process messages — `process_representation_tasks_batch` crashes at `response.content` (line ~164 in deriver.py)
- Trigger a dream via `POST /v3/workspaces/{id}/schedule_dream` — the dream specialist's tool loop gets `None` from `call_func()` on every iteration
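For reference, a minimal environment setup for the first step might look like this (the project ID is a placeholder, and any Honcho-specific provider-selection setting is not shown; `ANTHROPIC_VERTEX_PROJECT_ID` and `CLOUD_ML_REGION` are the variables named above):

```shell
# Hypothetical env excerpt for reproducing step 1.
# my-gcp-project is a placeholder project ID.
export ANTHROPIC_VERTEX_PROJECT_ID=my-gcp-project
export CLOUD_ML_REGION=global
```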
### Expected behaviour

Responses from `AsyncAnthropicVertex` should have non-null `content` and `usage`, the same as `AsyncAnthropic`.
### Error traces

Deriver crash (deriver.py:164):

```
AttributeError: 'NoneType' object has no attribute 'content'
```

Tool loop crash (clients.py:797):

```
total_cache_read_tokens += response.cache_read_input_tokens
AttributeError: 'NoneType' object has no attribute 'input_tokens'
```

Dream crash after adding guards:

```
ValueError: LLM call returned None after 12 attempts
```
### Root cause analysis

The Anthropic handler in `honcho_llm_call_inner` (clients.py ~line 1774) returns `None` when `anthropic_response.content` is empty. The callers (`_execute_tool_loop`, `process_representation_tasks_batch`, the dream specialists) don't handle `None` returns.

The `_execute_tool_loop` function (clients.py ~line 791) assumes `call_func()` always returns a non-null `HonchoLLMCallResponse`. When it doesn't, the token-accumulation lines crash.
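The failure mode can be reproduced in isolation with a minimal sketch (`call_func` here is a stub standing in for the real Vertex-backed call, not Honcho's implementation):

```python
# Minimal sketch of the crash: the tool loop assumes call_func() always
# returns a response object, so a None return breaks token accumulation.

def call_func():
    # Simulates honcho_llm_call_inner returning None when
    # anthropic_response.content comes back empty from Vertex.
    return None

total_cache_read_tokens = 0
response = call_func()
try:
    # Mirrors the accumulation line at clients.py:797.
    total_cache_read_tokens += response.cache_read_input_tokens
except AttributeError as exc:
    print(exc)  # 'NoneType' object has no attribute 'cache_read_input_tokens'
```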
### Suggested fix

Add null guards at:

- `clients.py:~791` — check `response is None` after `call_func()`, then retry or raise
- `clients.py:~1778` — log and return `None` when the Anthropic content is empty (some code paths already return `None` implicitly)
- `deriver.py:~154` — guard `response` and `response.content` before accessing them
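One possible shape for the guard around the LLM call — a sketch only; `call_with_retry`, the attempt count, and the response shape are illustrative, not Honcho's actual API:

```python
# Hypothetical helper sketching the null guard, not Honcho's implementation.

def call_with_retry(call_func, max_attempts: int = 3):
    """Retry call_func until it yields a response with non-empty content."""
    for attempt in range(1, max_attempts + 1):
        response = call_func()
        if response is not None and getattr(response, "content", None):
            return response
        # The real code would log the None/empty response here before retrying.
    raise ValueError(f"LLM call returned None after {max_attempts} attempts")
```

With a guard like this in place, downstream callers such as the deriver only ever see a response with non-empty content, and the empty-content case surfaces as one clear error instead of scattered `NoneType` crashes.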
### Your environment

- OS: macOS (Docker containers running Honcho)
- Honcho Server Version: latest main (commit from April 2026)
- Anthropic SDK: 0.83.0 with `[vertex]` extras
- Provider: `AsyncAnthropicVertex` with `region=global`
- Models: claude-haiku-4-5, claude-sonnet-4-6
### Additional context

Direct API calls to `AnthropicVertex` work correctly (including tool calling). The issue only manifests inside Honcho's tool loop, suggesting a message-formatting or state-accumulation issue in multi-turn tool conversations via the Vertex endpoint. Related: PR #535 adds Vertex AI support.