Real-world usage report: empty representations, duplicate conclusions, no filtering — feedback from credit user #557

@gueysito

Description

Context

I've been using Honcho (SDK v2.0.1, managed service) for about 4 weeks with a single AI agent (workspace biggie, 2 peers). The agent runs daily conversations via Telegram. Dialectic budget was recently set to 1800 chars / medium reasoning.

Sharing this as constructive feedback — I'm on credits, not a paying customer yet, and I want to help improve the product. Honcho has been genuinely useful for my setup, and I'd like to see these gaps addressed so I can expand to a second agent.

What's working well

  • Peer cards are excellent — rich, accurate, well-structured biographical/preference data for both user and AI peer
  • Conclusion generation is capturing real preferences, rules, project context, and behavioral patterns
  • Session management works cleanly (53 sessions, message storage reliable)
  • The overall concept of persistent cross-session memory is exactly what I need

What's not working

1. Empty representations (relates to #524, #494)

Despite 762 stored conclusions, calling peer.representation("other_peer") returns an empty string for both peers. This is the primary feature my dialectic budget goes toward: the synthesis layer that should turn raw conclusions into a coherent understanding. It has returned empty since activation (~4 weeks ago).

from honcho import Honcho

client = Honcho()  # assumes API key configured via environment
yuki = client.peer("yuki")
rep = yuki.representation("biggie")
print(repr(rep))  # '' for both peers

2. Duplicate conclusions (~14% of total)

54 unique strings appear multiple times across the 762 conclusions, totaling 110 duplicate entries. Examples:

  • "biggie's home directory is /Users/Hermes" — 11x
  • "biggie dislikes overengineering and token waste" — 11x
  • "biggie wants Yuki to be a high-trust creative/technical partner" — 7x

I understand the maximalist storage philosophy (per maintainer comments on #444), but 11 copies of the same fact seems like the dreaming/consolidation system isn't keeping up.
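For reference, this is roughly how I counted the duplicates: a minimal sketch using collections.Counter over the conclusion texts. The sample list below is illustrative, not my actual 762-entry export.

```python
from collections import Counter

# Illustrative sample; in practice this list comes from exporting
# the workspace's stored conclusions.
conclusions = [
    "biggie's home directory is /Users/Hermes",
    "biggie's home directory is /Users/Hermes",
    "biggie dislikes overengineering and token waste",
    "biggie's home directory is /Users/Hermes",
]

counts = Counter(conclusions)
dupes = {text: n for text, n in counts.items() if n > 1}
extra = sum(n - 1 for n in dupes.values())  # redundant copies beyond the first
print(dupes)  # {"biggie's home directory is /Users/Hermes": 3}
print(extra)  # 2
```

Applied to the real data, "unique strings appearing multiple times" is len(dupes) (54 here) and the redundant-entry total is extra (110 here).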

3. No way to filter low-value conclusions

A significant portion of conclusions are infrastructure facts (file paths, B2 endpoints, log locations) that are low-value for cross-session recall. I'd love a way to guide the deriver — something like PR #430's custom instructions approach would be ideal.
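Until something like that lands, the only option I see is filtering client-side after retrieval. A rough sketch of the kind of heuristic I mean; the patterns here are my own guesses at what counts as an infrastructure fact, not anything from the SDK:

```python
import re

# Hypothetical heuristics for "infrastructure facts": file paths,
# URLs/endpoints (incl. B2/S3), and log locations. Tune to taste.
INFRA_PATTERNS = [
    re.compile(r"/(?:Users|home|var|etc|opt)/"),                  # absolute file paths
    re.compile(r"https?://|\.s3\.|b2\.", re.IGNORECASE),          # endpoints / object storage
    re.compile(r"\blogs?\b", re.IGNORECASE),                      # log locations
]

def is_low_value(conclusion: str) -> bool:
    """Flag conclusions that look like infrastructure trivia."""
    return any(p.search(conclusion) for p in INFRA_PATTERNS)

sample = [
    "biggie's home directory is /Users/Hermes",
    "biggie wants Yuki to be a high-trust creative/technical partner",
]
print([c for c in sample if not is_low_value(c)])
# ['biggie wants Yuki to be a high-trust creative/technical partner']
```

This works as a stopgap but happens after derivation, so the low-value facts still consume storage and dialectic budget, which is why deriver-side guidance would be better.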

What I'd find helpful

  • Guidance on whether empty representations are expected with my config, or if there's something I should change
  • Any ETA on deriver custom instructions (#430) — that would solve the filtering problem
  • Whether schedule_dream() would help kick-start representation generation

Environment

  • SDK: honcho v2.0.1 (Python)
  • Service: managed (api.honcho.dev)
  • Config: 1 workspace, 2 active peers, hybrid memory mode, 1200 chars / medium reasoning
  • Usage: ~4 weeks, 53 sessions, 762 conclusions
