Context

I've been using Honcho (SDK v2.0.1, managed service) for about 4 weeks with a single AI agent (workspace biggie, 2 peers). The agent runs daily conversations via Telegram. Dialectic budget was recently set to 1800 chars / medium reasoning.

Sharing this as constructive feedback — I'm on credits, not a paying customer yet, and I want to help improve the product. Honcho has been genuinely useful for my setup, and I'd like to see these gaps addressed so I can expand to a second agent.
What's working well
Peer cards are excellent — rich, accurate, well-structured biographical/preference data for both user and AI peer
Conclusion generation is capturing real preferences, rules, project context, and behavioral patterns
Session management works cleanly (53 sessions, message storage reliable)
The overall concept of persistent cross-session memory is exactly what I need
What's not working

1. Empty representations (relates to #524, #494)

Despite having 762 conclusions stored, calling peer.representation("other_peer") returns an empty string for both peers. This is the primary feature I'm paying dialectic budget for — the synthesis layer that should turn raw conclusions into a coherent understanding. It's been empty since activation (~4 weeks).
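In the meantime I've been stitching together a crude client-side stand-in from the raw conclusions. To be clear, this is a hypothetical helper of my own, not anything in the SDK; `conclusions` is just a list of conclusion strings pulled via the API:

```python
from collections import Counter

def naive_representation(conclusions: list[str], budget_chars: int = 1800) -> str:
    """Stitch deduplicated conclusions into a crude representation string.

    Client-side workaround sketch, not Honcho's synthesis layer: facts that
    were derived most often are assumed most salient and listed first, up to
    the dialectic character budget.
    """
    counts = Counter(c.strip() for c in conclusions if c.strip())
    # Most-repeated facts first, then alphabetical order as a stable tiebreak.
    ranked = sorted(counts, key=lambda c: (-counts[c], c))
    lines: list[str] = []
    used = 0
    for fact in ranked:
        line = f"- {fact}"
        if used + len(line) + 1 > budget_chars:
            break
        lines.append(line)
        used += len(line) + 1
    return "\n".join(lines)

conclusions = [
    "biggie's home directory is /Users/Hermes",
    "biggie's home directory is /Users/Hermes",
    "biggie dislikes overengineering and token waste",
]
print(naive_representation(conclusions))
```

It's obviously no substitute for the real synthesis layer, but it keeps prompts populated while representations come back empty.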
2. Duplicate conclusions (~14% of total)

54 unique strings appear multiple times across the 762 conclusions, totaling 110 duplicate entries. Examples:
"biggie's home directory is /Users/Hermes" — 11x
"biggie dislikes overengineering and token waste" — 11x
"biggie wants Yuki to be a high-trust creative/technical partner" — 7x
I understand the maximalist storage philosophy (per maintainer comments on #444), but 11 copies of the same fact suggest the dreaming/consolidation system isn't keeping up.
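For reference, the duplicate numbers above came from a quick script along these lines (a sketch: `duplicate_stats` is my own helper, and I'm counting surplus copies beyond the first occurrence of each string):

```python
from collections import Counter

def duplicate_stats(conclusions: list[str]) -> tuple[int, int]:
    """Return (unique strings appearing more than once, total surplus copies)."""
    counts = Counter(conclusions)
    dupes = {text: n for text, n in counts.items() if n > 1}
    surplus = sum(n - 1 for n in dupes.values())
    return len(dupes), surplus

# Toy data: "a" stored three times, "b" twice, "c" once.
sample = ["a", "a", "a", "b", "b", "c"]
print(duplicate_stats(sample))  # → (2, 3)
```

Run over the 762 stored conclusion strings, this is what produced the 54 / 110 figures.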
3. No way to filter low-value conclusions
A significant portion of conclusions are infrastructure facts (file paths, B2 endpoints, log locations) that are low-value for cross-session recall. I'd love a way to guide the deriver — something like PR #430's custom instructions approach would be ideal.
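As a stopgap I currently filter these client-side before building prompts. A rough sketch (the patterns and the `is_infrastructure` helper are my own heuristics, not SDK features):

```python
import re

# Heuristic patterns for low-value "infrastructure" facts:
# absolute file paths, URLs/endpoints, and log files.
INFRA_PATTERNS = [
    re.compile(r"/[\w.\-]+(?:/[\w.\-]+)+"),  # absolute file paths
    re.compile(r"https?://\S+"),             # endpoints / URLs
    re.compile(r"\.log\b"),                  # log file references
]

def is_infrastructure(conclusion: str) -> bool:
    """True if the conclusion looks like an infrastructure fact."""
    return any(p.search(conclusion) for p in INFRA_PATTERNS)

facts = [
    "biggie's home directory is /Users/Hermes",
    "biggie dislikes overengineering and token waste",
]
kept = [f for f in facts if not is_infrastructure(f)]
print(kept)  # keeps only the preference fact
```

This works, but it's exactly the kind of thing I'd rather express once as deriver guidance (à la PR #430) than re-implement on the client.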
What I'd find helpful
Guidance on whether empty representations are expected with my config, or if there's something I should change
Whether schedule_dream() would help kick-start representation generation

Environment