docs(security): add initial threat model for langchain monorepo #36317

John Kennedy (jkennedyvz) wants to merge 1 commit into `master` from
Conversation
Generated by langster-threat-model (deep mode, commit 494b760). Documents 10 components, 5 trust boundaries, 8 data flows, 8 threats (2 verified, 3 likely, 1 unverified, 1 accepted, 1 partially mitigated), 7 out-of-scope patterns, and 2 investigated/dismissed findings.

Key findings:

- T1 (High/Verified): env var exfiltration via `secrets_from_env=True`; safe by default
- T4 (High/Likely): API credential leakage into agent subprocess env
- T5 (Medium/Likely): DNS rebinding bypass in SSRF protection for image URL token counting
- T7 (Medium/Likely): symlink path traversal in `FilesystemFileSearchMiddleware`

AI-generated with human review required.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
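As a minimal sketch of the mitigation shape for a T7-style finding: symlink path traversal is commonly prevented by resolving symlinks before checking containment. The helper below is illustrative only — it is not the actual `FilesystemFileSearchMiddleware` code, and the function name is hypothetical.

```python
import os


def is_within_root(root: str, candidate: str) -> bool:
    """Return True only if `candidate` resolves (symlinks included)
    to a path inside `root`.

    Checking the raw path string is not enough: a symlink inside the
    search root can point outside it, so both paths are resolved with
    os.path.realpath before the containment test.
    """
    real_root = os.path.realpath(root)
    real_candidate = os.path.realpath(candidate)
    # commonpath equals the root only when candidate is at or under it.
    return os.path.commonpath([real_root, real_candidate]) == real_root
```

A symlink planted under the search root that targets, say, `/etc` would fail this check, because the resolved path no longer shares the root as its common prefix.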
@@ -0,0 +1,357 @@

# Threat Model: langchain-ai/langchain

> Generated: 2026-03-27 | Commit: 494b760028 | Scope: /workspace/langchain (full monorepo) | Visibility: Open Source | Mode: deep
Let's not version these
This threat model is an excellent starting point, and the AI-agent-specific data flows (DF6/DF7/DF8) correctly identify where trust decisions need to happen. One gap worth considering as a distinct threat: cross-organizational agent identity (T9).

The current threats address the LangChain agent operating within a known trust boundary (the developer controls the environment). But production LangChain agents increasingly call external agents, tools, and APIs that they did not provision, and those external agents may impersonate legitimate counterparts.

Concrete scenario: a LangChain agent delegates a subtask to an external "billing tool" agent. The current threat model covers T4 (prompt injection) and T5 (malicious tool output). But if the external agent itself presents a falsified identity — claiming to be a verified financial processor when it is not — there is no modeled threat for that. This is distinct from prompt injection: it is identity fraud at the agent-to-agent handshake layer.

A potential T9 addition:

Happy to draft formal threat entry language if this direction is useful.
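One common countermeasure for identity fraud at the handshake layer is a challenge-response proof against credentials provisioned out of band. The sketch below uses a shared-secret HMAC for brevity; the registry, function names, and the "billing-tool" entry are illustrative assumptions, not LangChain APIs (a production design would more likely use asymmetric keys or mTLS).

```python
import hashlib
import hmac
import secrets

# Hypothetical registry of agent credentials provisioned out of band.
# A claimed identity is only accepted if it appears here AND the peer
# proves possession of the matching key.
TRUSTED_AGENT_KEYS: dict[str, bytes] = {
    "billing-tool": b"shared-secret-from-provisioning",
}


def issue_challenge() -> bytes:
    """Fresh random nonce; prevents replay of an old proof."""
    return secrets.token_bytes(32)


def respond(agent_key: bytes, challenge: bytes) -> str:
    """Run by the external agent: prove possession of its key."""
    return hmac.new(agent_key, challenge, hashlib.sha256).hexdigest()


def verify_identity(claimed_name: str, challenge: bytes, response: str) -> bool:
    """Run by the calling agent before trusting the peer's claimed name."""
    key = TRUSTED_AGENT_KEYS.get(claimed_name)
    if key is None:
        return False  # unknown identity: reject outright
    expected = hmac.new(key, challenge, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(expected, response)
```

The key property for a T9 entry: the decision to trust "billing-tool" rests on a verifiable credential, not on the name the peer presents in the handshake.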
Summary
- Adds `THREAT_MODEL.md` at the repo root documenting the security posture of the `langchain-ai/langchain` monorepo (commit `494b760028`), covering 10 components, 5 trust boundaries, 8 data flows, 8 threats, 7 out-of-scope patterns, and 2 investigated/dismissed findings
- Uses `file:SymbolName` code references throughout