
feat(agent): implement live mode with LLM tool execution#126

Open
KhairallahA wants to merge 15 commits into keep-starknet-strange:main from KhairallahA:feat/agent-live-runtime

Conversation


KhairallahA (Contributor) commented Feb 17, 2026

feat(agent): implement live mode with LLM tool execution

  • Add useAgentChatLive hook for real LLM streaming with stored API key
  • Parse tool_calls from OpenAI SSE streaming response
  • Execute tools via registry (get_balances, prepare_transfer,
    estimate_fee, execute_transfer) and send results back to LLM
  • Integrate live chat into Agent tab UI with streaming indicators
  • Add ToolCallCard component for displaying tool execution status
  • Preserve demo mode unchanged

TypeScript types updated:

  • Add tool_call chunk type and tool role to ChatMessage
  • Add tools option to StreamOptions
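The tool-call parsing and buffering the bullets above describe can be sketched roughly as follows. The chunk shapes follow OpenAI's public SSE streaming format (argument fragments arrive across several `tool_calls` deltas and must be accumulated until the terminal event); the function and type names here are illustrative, not the PR's actual code.

```typescript
// Illustrative sketch: accumulate tool_call argument fragments across SSE
// chunks, flushing parsed calls at the terminal [DONE] event.
interface ParsedToolCall {
  id: string;
  name: string;
  arguments: Record<string, unknown>;
}

function makeToolCallParser() {
  // Fragments are keyed by the tool_calls index within the stream.
  const pending = new Map<number, { id: string; name: string; args: string }>();

  function feed(line: string): ParsedToolCall[] {
    if (!line.startsWith("data: ")) return [];
    const payload = line.slice("data: ".length).trim();
    if (payload === "[DONE]") return flush();
    const delta = JSON.parse(payload).choices?.[0]?.delta;
    for (const tc of delta?.tool_calls ?? []) {
      const entry = pending.get(tc.index) ?? { id: "", name: "", args: "" };
      if (tc.id) entry.id = tc.id;
      if (tc.function?.name) entry.name = tc.function.name;
      if (tc.function?.arguments) entry.args += tc.function.arguments; // append fragment
      pending.set(tc.index, entry);
    }
    return [];
  }

  // Only at the terminal event do we attempt to parse the buffered JSON.
  function flush(): ParsedToolCall[] {
    const out: ParsedToolCall[] = [];
    for (const { id, name, args } of pending.values()) {
      let parsed: Record<string, unknown> = {};
      try { parsed = args ? JSON.parse(args) : {}; } catch { /* keep {} on bad JSON */ }
      out.push({ id, name, arguments: parsed });
    }
    pending.clear();
    return out;
  }

  return { feed };
}
```

Parsing only per-chunk (rather than after accumulation) is exactly the failure mode the Greptile review flags further down.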

Summary by CodeRabbit

  • New Features

    • Live chat mode with streaming responses, tool execution, and inline tool-call cards
    • Typing indicator for streaming messages
    • Sanitized display of sensitive tool inputs/results
    • Visible API-key readiness and clearer live-mode error display
  • Improvements

    • Unified chat state and rendering for demo and live modes
    • Transfer/send controls disabled and styled while assistant is responding
    • More robust tool-call streaming, buffering, and handling
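The "sanitized display of sensitive tool inputs/results" mentioned above suggests a key-based redactor along these lines. This is a hedged sketch: the key list and the function name `sanitizeForDisplay` are assumptions for illustration, not the PR's actual configuration.

```typescript
// Illustrative redactor for tool-call params/results before rendering.
// The SECRET_KEYS list is an assumption, not the PR's real key set.
const SECRET_KEYS = new Set(["privateKey", "apiKey", "mnemonic", "seed", "password"]);

function sanitizeForDisplay(value: unknown): unknown {
  if (Array.isArray(value)) return value.map(sanitizeForDisplay);
  if (value && typeof value === "object") {
    const out: Record<string, unknown> = {};
    for (const [k, v] of Object.entries(value)) {
      // Mask true secrets; recurse into everything else so nested
      // objects (e.g. a tool result payload) are also covered.
      out[k] = SECRET_KEYS.has(k) ? "[redacted]" : sanitizeForDisplay(v);
    }
    return out;
  }
  return value;
}
```

Note the later CodeRabbit comment argues that wallet-visible fields such as addresses and balances should *not* be in this list, only true secrets.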

Greptile Summary

This PR implements live mode LLM streaming with tool execution for the agent chat interface. The implementation correctly addresses most security concerns from previous reviews:

What Changed:

  • Added useAgentChatLive hook with real OpenAI streaming and tool execution
  • Implemented streaming tool call argument buffering in OpenAI adapter
  • Added authorization checks blocking prepare_transfer and execute_transfer
  • Integrated live/demo mode toggle in Agent UI with tool call status cards
  • Added proper sanitization for sensitive data display

Key Security Controls:

  • Only get_balances and estimate_fee are auto-executable
  • Write operations (execute_transfer) correctly blocked with error message
  • Tool execution goes through registry with schema validation and audit logging
  • Proper circular reference handling in tool result serialization
  • API keys never exposed in logs or error messages
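The allowlist gate described in these controls (only `get_balances` and `estimate_fee` auto-executable, writes blocked with an error message) can be sketched as below. The constant name `AUTO_EXECUTABLE_TOOLS` appears in the review itself; the surrounding function is illustrative.

```typescript
// Sketch of an auto-execution allowlist for agent tool calls.
// Anything not on the list is refused with a user-visible message.
const AUTO_EXECUTABLE_TOOLS = new Set(["get_balances", "estimate_fee"]);

function gateToolCall(name: string): { execute: boolean; message?: string } {
  if (AUTO_EXECUTABLE_TOOLS.has(name)) return { execute: true };
  return {
    execute: false,
    message: `Blocked: ${name} requires manual approval and was not executed.`,
  };
}
```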

Issues Found:

  • prepare_transfer is classified as requiring manual approval, but it's a read-only validation operation that doesn't execute transfers. This makes the agent unable to help users plan transactions effectively, severely limiting live mode UX compared to demo mode.

Architecture Quality:
The streaming implementation is well-architected with proper buffering, clean state management, and correct message flow. Tool call arguments are accumulated across SSE chunks and flushed at terminal events. The dual LLM call pattern (initial request → tool execution → follow-up with results) is correctly implemented.

Confidence Score: 4/5

  • Safe to merge with one UX limitation - agent functionality is restricted but no security vulnerabilities present
  • The implementation correctly enforces security boundaries and blocks write operations. Previous critical issues (tool argument buffering, message duplication, schema mismatches) have been resolved. The one remaining issue is that prepare_transfer is overly restricted - it's a read-only validation tool that should be auto-executable for agent usability, but blocking it doesn't create a security risk, only a UX limitation.
  • apps/mobile/lib/agent/use-agent-chat.ts - Review the AUTO_EXECUTABLE_TOOLS configuration to determine if prepare_transfer should be unblocked for better agent UX

Important Files Changed

  • apps/mobile/lib/agent/use-agent-chat.ts: implements live LLM streaming with tool execution. Properly blocks write operations, but prepare_transfer should be auto-executable for better UX.
  • apps/mobile/lib/agent-runtime/openai-adapter.ts: correctly implements streaming tool-call argument accumulation and buffering, with clean separation of concerns.
  • apps/mobile/app/(tabs)/agent.tsx: integrates the live/demo chat modes into the UI, with proper sanitization of sensitive data in the tool-call display.

Sequence Diagram

sequenceDiagram
    participant User
    participant AgentUI as Agent UI
    participant LiveChat as useAgentChatLive
    participant OpenAI as OpenAI Adapter
    participant LLM as OpenAI API
    participant Registry as Tool Registry
    participant Tools as Core Tools

    User->>AgentUI: Send message
    AgentUI->>LiveChat: sendMessage(text)
    LiveChat->>LiveChat: Add user message to state
    LiveChat->>OpenAI: streamChat(messages, tools)
    OpenAI->>LLM: POST /chat/completions (SSE)
    
    loop Streaming Response
        LLM-->>OpenAI: SSE chunk (delta/tool_call)
        OpenAI->>OpenAI: Buffer tool call arguments
        OpenAI-->>LiveChat: StreamChunk
        alt Text Delta
            LiveChat->>LiveChat: Append to assistant message
            LiveChat->>AgentUI: Update UI (streaming)
        else Tool Call
            LiveChat->>LiveChat: Add to currentToolCalls[]
        end
    end
    
    LLM-->>OpenAI: [DONE]
    OpenAI->>OpenAI: Flush pending tool calls
    OpenAI-->>LiveChat: done chunk
    
    alt Tool Calls Present
        loop For each tool call
            LiveChat->>LiveChat: Check AUTO_EXECUTABLE_TOOLS
            alt Auto-executable
                LiveChat->>Registry: executeTool(name, args)
                Registry->>Registry: Validate args schema
                Registry->>Tools: handler(args)
                Tools-->>Registry: ToolResult
                Registry->>Registry: Append audit log
                Registry-->>LiveChat: ToolResult
            else Blocked
                LiveChat->>LiveChat: Return "Blocked" message
            end
            LiveChat->>AgentUI: Update tool call status
        end
        
        LiveChat->>LiveChat: Build LLM messages with tool results
        LiveChat->>OpenAI: streamChat(messages + tool results)
        OpenAI->>LLM: POST /chat/completions (SSE)
        
        loop Final Response
            LLM-->>OpenAI: SSE chunk
            OpenAI-->>LiveChat: StreamChunk
            LiveChat->>AgentUI: Update final assistant message
        end
    end
    
    LiveChat->>LiveChat: Mark streaming complete
    LiveChat->>AgentUI: Update UI (complete)

Last reviewed commit: bae8287


coderabbitai bot commented Feb 17, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

Walkthrough

Adds live streaming agent support with tool-call parsing and execution: the SSE parser emits tool_call chunks, runtime types and the provider accept tools, a new live chat hook orchestrates streaming and tool execution, and the demo state and UI are updated to render streaming output, tool-call cards, and live-mode errors.

Changes

  • Agent UI (apps/mobile/app/(tabs)/agent.tsx): unifies demo/live hooks, selects chatState/chatActions by live mode, surfaces chatState.error, renders ToolCallCard, passes optional isStreaming to MessageBubble, and adjusts transfer/send button and input routing for live vs demo. Public: MessageBubble now accepts isStreaming?: boolean.
  • OpenAI SSE adapter (apps/mobile/lib/agent-runtime/openai-adapter.ts): introduces per-session pending tool-call buffering and flushPendingToolCalls, changes parseSseLine to accept pendingToolCalls and return StreamChunk[], emits { type: "tool_call" } chunks, and includes tools in the streamed request body.
  • Runtime types (apps/mobile/lib/agent-runtime/types.ts): adds the tool role, toolCallId? on messages, a new ParsedToolCall and tool_call StreamChunk variant, an OpenAITool type, and tools?: OpenAITool[] on StreamOptions.
  • Live chat runtime (apps/mobile/lib/agent/use-agent-chat.ts): adds useAgentChatLive implementing the streaming flow: API-key checks, provider creation, handling of delta/tool_call/done chunks, collecting and executing tool calls, injecting tool results, and continuing streaming. AgentChatState gains hasApiKey and error.
  • Demo state (apps/mobile/lib/demo/demo-state.ts): demo messages gain an optional isStreaming?: boolean to match the live message shape.
  • Manifest / deps (manifest_file, package.json): updates manifest and package metadata (dependency/manifest adjustments).

Sequence Diagram

sequenceDiagram
    participant UI as Agent UI
    participant Hook as useAgentChatLive
    participant Provider as OpenAI Provider
    participant Tools as Tool Executor
    participant State as Chat State

    UI->>Hook: sendMessage(userMessage)
    activate Hook
    Hook->>State: set isResponding, append user message
    Hook->>Provider: stream(initialMessages, tools)
    activate Provider

    Provider-->>Hook: tool_call (id,name,args)
    Hook->>State: add toolCall entry
    Hook->>Tools: executeToolCall(toolCall)
    activate Tools
    Tools-->>Hook: toolResult
    deactivate Tools
    Hook->>State: append tool result as message

    Provider-->>Hook: text delta
    Hook->>State: update assistant message (isStreaming=true)
    Provider-->>Hook: done
    deactivate Provider

    Hook->>Provider: stream(messagesWithToolResults, tools)
    activate Provider
    Provider-->>Hook: final response deltas
    Provider-->>Hook: done
    deactivate Provider

    Hook->>State: set isResponding(false)
    deactivate Hook
    UI->>State: read messages, toolCalls, error
    UI->>UI: render MessageBubble(isStreaming), ToolCallCard, error

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes


Caution

Pre-merge checks failed

Please resolve all errors before merging. Addressing warnings is optional.


❌ Failed checks (1 error, 1 warning)

  • Cross-Repo Boundary Awareness (❌ Error): the PR introduces public API changes to types.ts, use-agent-chat.ts, and agent.tsx but does not acknowledge the impact on the counterpart repos starknet-agentic or SISNA. Resolution: update the PR description to state whether the counterpart repos are affected and what updates they require, or confirm no dependency exists.
  • Docstring Coverage (⚠️ Warning): docstring coverage is 61.54%, below the required 80.00% threshold. Resolution: write docstrings for the functions missing them.
✅ Passed checks (4 passed)
  • Spec Impact Declaration (✅ Passed): the PR modifies only files in the apps/mobile/app/, apps/mobile/lib/agent/, apps/mobile/lib/agent-runtime/, and apps/mobile/lib/demo/ directories, with no changes to boundary files.
  • Security Rationale For Signer Changes (✅ Passed): the PR includes a comprehensive security rationale for both the signer and contracts modules, with threat-model documentation and security controls.
  • Title check (✅ Passed): the title clearly and specifically describes the main change, implementing a live mode with LLM tool execution.
  • Description Check (✅ Passed): check skipped because CodeRabbit's high-level summary is enabled.



greptile-apps bot commented Feb 17, 2026

Greptile Summary

This PR adds live LLM chat with tool execution to the Agent tab, introducing useAgentChatLive for real OpenAI streaming alongside the existing demo mode. It parses tool calls from SSE chunks, executes them via the tool registry, and feeds results back to the LLM for a follow-up response. The UI gains a ToolCallCard component and streaming indicators.

However, the tool-call flow is non-functional due to several compounding bugs:

  • Tools never sent to API: streamChat in openai-adapter.ts accepts opts.tools but never includes it in the request body — the LLM will never know tools are available and will never produce tool calls.
  • Tool message mapping broken: The message serializer strips toolCallId from tool-role messages, and assistant messages lack the required tool_calls array — the follow-up API call with tool results will fail with a 400 error.
  • Incomplete argument streaming: Only the first SSE chunk of a tool call (containing id + name) is parsed; subsequent argument fragments are silently dropped, yielding empty or partial arguments.
  • User messages duplicated: The streaming state update logic (slice(0, -1) + re-append) duplicates the user message on every delta chunk, corrupting the conversation display.
  • Stale closure on messages: sendMessage captures state.messages at call time, missing the just-added user message when building the LLM context.
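The duplication and stale-closure bugs listed above share one fix: update the assistant message in place by id inside a functional state update, rather than capturing `state.messages` and re-appending with `slice(0, -1)`. An illustrative, framework-agnostic sketch (names hypothetical):

```typescript
// Sketch of a delta handler that never duplicates prior messages:
// the assistant message is created once and then patched by id.
interface Msg { id: string; role: string; text: string; isStreaming?: boolean }

function applyDelta(messages: Msg[], assistantId: string, delta: string): Msg[] {
  const exists = messages.some((m) => m.id === assistantId);
  if (!exists) {
    // First delta: append a fresh assistant message; earlier messages untouched.
    return [...messages, { id: assistantId, role: "assistant", text: delta, isStreaming: true }];
  }
  // Subsequent deltas: replace only the assistant message, keyed by id.
  return messages.map((m) =>
    m.id === assistantId ? { ...m, text: m.text + delta } : m,
  );
}
```

In React this would run inside a functional `setState((prev) => applyDelta(prev, id, delta))`, which also sidesteps the stale-closure problem because `prev` always reflects the latest state, including the just-added user message.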

Confidence Score: 1/5

  • The live tool-call flow will not work in production due to multiple compounding bugs in the adapter and state management layers.
  • Score of 1 reflects that while the types and UI shell are reasonable, the core feature (LLM tool execution) has critical bugs at every layer: tools aren't sent to the API, tool messages are malformed, streaming argument accumulation is incomplete, and the React state updates corrupt the conversation. The demo mode is unaffected, but the new live mode code path is non-functional.
  • apps/mobile/lib/agent-runtime/openai-adapter.ts (tools not passed, messages lose tool_call_id, argument streaming incomplete) and apps/mobile/lib/agent/use-agent-chat.ts (message duplication, stale closure)

Important Files Changed

  • apps/mobile/lib/agent-runtime/openai-adapter.ts: Critical. Tools are never included in the API request body (dead code); the tool-message mapping strips the required tool_call_id; streaming tool-call argument accumulation is incomplete, with only the first chunk parsed.
  • apps/mobile/lib/agent/use-agent-chat.ts: user messages are duplicated on every streaming delta due to slice-and-append logic; a stale closure captures state.messages for the LLM context; unused toolCallBuffer variable.
  • apps/mobile/lib/agent-runtime/types.ts: clean type additions (tool role, ParsedToolCall, toolCallId on ChatMessage, tools on StreamOptions). No issues found.
  • apps/mobile/app/(tabs)/agent.tsx: UI integration is mostly clean; hooks are called unconditionally and mode switching works. Minor: (m as any).isStreaming bypasses types, and quick prompts still use actions.sendAgentMessage instead of chatActions.sendMessage even in live mode.

Sequence Diagram

sequenceDiagram
    participant User
    participant AgentUI as Agent Tab UI
    participant Hook as useAgentChatLive
    participant Adapter as OpenAI Adapter
    participant API as OpenAI API
    participant Registry as Tool Registry

    User->>AgentUI: Type message & send
    AgentUI->>Hook: sendMessage(text)
    Hook->>Hook: Add user msg to state
    Hook->>Adapter: streamChat(messages, tools)
    Note over Adapter: BUG: tools not included in body
    Adapter->>API: POST /chat/completions (SSE)
    API-->>Adapter: delta chunks (text)
    Adapter-->>Hook: StreamChunk{type: "delta"}
    Hook->>Hook: Update assistant message
    Note over Hook: BUG: user msg duplicated each delta
    API-->>Adapter: tool_call chunks (streamed)
    Note over Adapter: BUG: only first chunk parsed
    Adapter-->>Hook: StreamChunk{type: "tool_call"}
    Hook->>Registry: executeTool(name, args)
    Registry-->>Hook: ToolResult
    Hook->>Adapter: streamChat(msgs + tool results)
    Note over Adapter: BUG: tool_call_id stripped
    Adapter->>API: POST /chat/completions
    API-->>Adapter: delta chunks (final answer)
    Adapter-->>Hook: StreamChunk{type: "delta"}
    Hook->>AgentUI: Update state (messages, toolCalls)
    AgentUI->>User: Render conversation + ToolCallCards

Last reviewed commit: f2b15b8


@greptile-apps greptile-apps bot left a comment


4 files reviewed, 6 comments



greptile-apps bot commented Feb 17, 2026

Additional Comments (2)

apps/mobile/lib/agent-runtime/openai-adapter.ts
Tools never sent to OpenAI API

The tools field was added to StreamOptions (types.ts:44) and callLlm passes it (use-agent-chat.ts:167), but streamChat here never includes opts.tools in the request body. The LLM will never receive tool definitions, so it cannot make tool calls — rendering the entire tool-call flow dead code in production.

      const body: Record<string, unknown> = {
        model,
        messages,
        stream: true,
      };
      if (opts.tools && opts.tools.length > 0) body.tools = opts.tools;
      if (opts.maxTokens != null) body.max_tokens = opts.maxTokens;
      if (opts.temperature != null) body.temperature = opts.temperature;

apps/mobile/lib/agent-runtime/openai-adapter.ts
Tool messages lose required tool_call_id

This mapping strips toolCallId from tool-role messages. The OpenAI API requires tool_call_id on messages with role: "tool" — without it, the follow-up call with tool results will return a 400 error. Additionally, assistant messages that triggered tool calls need a tool_calls array in the API payload.

        ...opts.messages.map((m) => {
          const mapped: Record<string, unknown> = { role: m.role, content: m.content };
          if (m.toolCallId) mapped.tool_call_id = m.toolCallId;
          return mapped;
        }),


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 6

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
apps/mobile/lib/agent-runtime/openai-adapter.ts (1)

192-198: ⚠️ Potential issue | 🔴 Critical

Tools are not passed to OpenAI API — tool calling will not work.

The opts.tools field is defined in StreamOptions but never included in the request body. OpenAI requires the tools parameter for function calling to work.

🐛 Proposed fix to include tools in request body
 const body: Record<string, unknown> = {
   model,
   messages,
   stream: true,
 };
 if (opts.maxTokens != null) body.max_tokens = opts.maxTokens;
 if (opts.temperature != null) body.temperature = opts.temperature;
+if (opts.tools && opts.tools.length > 0) body.tools = opts.tools;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/mobile/lib/agent-runtime/openai-adapter.ts` around lines 192 - 198, The
request body built in openai-adapter.ts for the streaming call omits the tools
parameter, so function/tool calling won't work; update the code that populates
the body (the const body: Record<string, unknown> = {...} block) to include
opts.tools when present (e.g., if (opts.tools != null) body.tools = opts.tools)
and ensure the shape matches StreamOptions.tools so the OpenAI API receives the
tools array for function calling; reference the variables body and opts.tools in
your change and keep the existing conditional pattern used for max_tokens and
temperature.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@apps/mobile/app/`(tabs)/agent.tsx:
- Around line 317-319: The code casts (m as any).isStreaming when rendering
MessageBubble because state.agent.messages lacks isStreaming while
chatState.messages (live) includes it; replace the unsafe cast by unifying or
narrowing the message type: update the shared message type (e.g., make Message
have optional isStreaming?: boolean) or create a discriminated union/ type guard
(e.g., isLiveMessage(m): m is LiveChatMessage) and use it in the map so you can
pass m.isStreaming safely to MessageBubble; update the definitions used by
state.agent.messages and chatState.messages or add a type guard function and use
it in the map where isLive is checked to ensure type-safe access to isStreaming.

In `@apps/mobile/lib/agent-runtime/openai-adapter.ts`:
- Around line 46-64: The loop over toolCalls currently returns on the first
match which drops subsequent tool calls; update the logic in the function that
processes toolCalls to produce all tool_call chunks instead of returning
immediately: iterate over toolCalls, for each tc with tc?.id and
tc?.function?.name parse tc.function.arguments with a try/catch (defaulting to
{} on parse error) and push a tool_call object ({ type: "tool_call", toolCall: {
id: tc.id, name: tc.function.name, arguments: args } }) into a results array,
then return that array (or change the function signature to stream/emit each
chunk individually) so every tool call in the same SSE event is handled rather
than only the first.
- Around line 48-53: The streaming handler currently JSON.parse's each
tc.function.arguments fragment into args (letting failures fall back to {}),
which breaks tool execution for incremental fragments; instead accumulate
fragments per tool_calls[index] (use the existing toolCallBuffer from
use-agent-chat.ts keyed by tool call index), concatenate incoming
tc.function.arguments pieces, attempt JSON.parse only when the combined string
forms valid JSON (or when tc.finish_reason indicates completion), and emit the
tool call with the parsed args only once the arguments are complete; update the
code paths that reference args, tc.function.arguments, and finish_reason so
parsing/validation happens after accumulation rather than per-chunk.

In `@apps/mobile/lib/agent-runtime/types.ts`:
- Around line 44-45: Replace the weak tools?: unknown[] with a concrete shared
type by extracting a ToolDefinition (or similar) interface that matches the tool
schema used in use-agent-chat.ts (the tool schema defined in use-agent-chat.ts
lines ~171-232); then import or declare that ToolDefinition in
apps/mobile/lib/agent-runtime/types.ts and change the property to tools?:
ToolDefinition[] (or readonly ToolDefinition[] | undefined) so callers get
proper typing for tool name, inputs, and metadata.

In `@apps/mobile/lib/agent/use-agent-chat.ts`:
- Around line 267-274: The toolCallEntry creation hardcodes operationType to
"write", causing read-only tools like get_balances to be misclassified; update
the logic around the ToolCall construction (toolCallEntry) to derive
operationType from the tool metadata or name (e.g., inspect tc.name or a
tc.metadata/flags field if available) and set operationType to "read" for
read-only tools (like "get_balances") and "write" otherwise, ensuring existing
fields id, toolName, params, timestamp, status remain unchanged; if a canonical
source of operation type exists (ToolCall type or tc metadata), use that instead
of name-based heuristics.
- Around line 247-260: The streaming update logic currently slices off the last
message and re-inserts userMsg which causes duplicate user messages; instead,
generate and store the assistant message id outside of setState (e.g., const
assistantId = `msg-${Date.now()}-assistant`) then in setState update only the
existing assistant message by mapping s.messages and replacing the message with
id === assistantId (updating text, isStreaming, createdAt) rather than slicing
and re-pushing userMsg; apply this same pattern to both streaming loops (the
first setState that builds messages with userMsg/fullText and the second
streaming loop that performs similar slicing) so updates modify the assistant
message in-place without duplicating userMsg.

---

Outside diff comments:
In `@apps/mobile/lib/agent-runtime/openai-adapter.ts`:
- Around line 192-198: The request body built in openai-adapter.ts for the
streaming call omits the tools parameter, so function/tool calling won't work;
update the code that populates the body (the const body: Record<string, unknown>
= {...} block) to include opts.tools when present (e.g., if (opts.tools != null)
body.tools = opts.tools) and ensure the shape matches StreamOptions.tools so the
OpenAI API receives the tools array for function calling; reference the
variables body and opts.tools in your change and keep the existing conditional
pattern used for max_tokens and temperature.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
apps/mobile/app/(tabs)/agent.tsx (1)

392-398: 🧹 Nitpick | 🔵 Trivial

Consider adding error handling for async send operations.

chatActions.sendMessage(text) is awaited but has no try/catch. If the LLM request fails, the error would be unhandled at this level. While chatState.error may capture it internally, a local catch could provide haptic feedback on failure for better UX consistency (similar to the approval flow above).

♻️ Optional: Add error handling with haptic feedback
                   } else if (isLive) {
                     // Live mode: send to LLM chat
-                    await chatActions.sendMessage(text);
+                    try {
+                      await chatActions.sendMessage(text);
+                    } catch {
+                      await haptic("warn");
+                    }
                   } else {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/mobile/app/`(tabs)/agent.tsx around lines 392 - 398, The live-path
awaits chatActions.sendMessage(text) without local error handling; wrap that
await in a try/catch inside the isLive branch (where chatActions.sendMessage is
called) and on catch trigger the same haptic/error UI behavior used in the
approval flow (e.g., call the existing haptic/error helper or emit a failure
feedback and set a local error state) so failures are surfaced to the user while
non-live paths continue to use actions.sendAgentMessage(text).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@apps/mobile/app/`(tabs)/agent.tsx:
- Around line 429-436: The UI is directly rendering
JSON.stringify(toolCall.params) and toolCall.result which may leak sensitive
wallet data; update the rendering in the component that shows toolCall (the
Muted lines referencing toolCall.params and toolCall.result) to call a sanitizer
before displaying: implement a helper like sanitizeToolCall or
redactSensitiveFields that accepts toolCall.params/result, removes or masks keys
such as private keys, addresses, accountIds, balances, txHashes, or any
configured blacklist (or alternatively whitelist safe keys), and returns a
truncated string for display; replace direct JSON.stringify/String usage with
the sanitized, truncated output to ensure secrets are filtered before rendering.
- Around line 321-327: The catch-all error handler currently puts raw exception
text into the UI via setState (setting isResponding and error), which can leak
secrets; replace that assignment so the UI error string is a generic,
non-sensitive message (e.g., "An unexpected error occurred. Please try again.")
while still marking isResponding false, and send the raw err to a secure logger
(or console/error reporting) instead of the UI. Update the catch block that
calls setState(...) (the same state that feeds chatState.error) so only the
sanitized message is stored and rendered, and ensure the original err is logged
only to an internal logger/console.error for troubleshooting.

---

Outside diff comments:
In `@apps/mobile/app/`(tabs)/agent.tsx:
- Around line 392-398: The live-path awaits chatActions.sendMessage(text)
without local error handling; wrap that await in a try/catch inside the isLive
branch (where chatActions.sendMessage is called) and on catch trigger the same
haptic/error UI behavior used in the approval flow (e.g., call the existing
haptic/error helper or emit a failure feedback and set a local error state) so
failures are surfaced to the user while non-live paths continue to use
actions.sendAgentMessage(text).

---

Duplicate comments:
In `@apps/mobile/app/`(tabs)/agent.tsx:
- Around line 317-319: The map currently passes m.isStreaming to <MessageBubble>
but DemoAgentMessage doesn't define isStreaming; add a type guard when rendering
messages in the isLive ? chatState.messages : state.agent.messages mapping to
check whether the message object has the isStreaming property (e.g., typeof (m
as any).isStreaming !== "undefined" or a user-defined isStreamingMessage(m)
predicate) and only pass isStreaming to MessageBubble when present, otherwise
pass false/omit the prop, updating the mapping around the MessageBubble
instantiation to use that guard so TypeScript no longer errors.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@apps/mobile/app/`(tabs)/agent.tsx:
- Around line 39-44: The current SENSITIVE_KEYS array wrongly treats
wallet-visible fields ("address", "addresses", "balance", "balances") as
secrets; update the constants to separate true secrets from user-visible wallet
data by creating a new SENSITIVE_SECRET_KEYS list that contains only
privateKey/private_key/secret/apiKey/api_key/key/secretKey/mnemonic/seed/password/token
etc., and remove "address"/"addresses"/"balance"/"balances" from that list; then
update any redaction logic (where SENSITIVE_KEYS is referenced—e.g., redaction
utility or the code path that redacts tool call results like get_balances) to
accept a context flag (e.g., isWalletContext or redactSecretsOnly) and use
SENSITIVE_SECRET_KEYS when in wallet UI so balances and addresses remain visible
while still redacting true secrets.
- Around line 89-92: The current hasIsStreaming implementation is too
permissive; change it into a proper type guard by updating its signature to
return a type predicate (e.g., m is { isStreaming: boolean }) and accept a
broader input type (like unknown) so it can safely narrow arbitrary values;
inside hasIsStreaming, check that m is an object (not null) and that typeof (m
as any).isStreaming === "boolean" before returning true, referencing the
hasIsStreaming function to locate and replace the existing implementation.

In `@apps/mobile/lib/agent/use-agent-chat.ts`:
- Around line 325-336: The variable assistantMsg declared as type LlmMessage
shadows an earlier assistantMsg of type ChatMessage (declared around the prior
block), reducing clarity; rename the LlmMessage variable (and any local uses) to
a distinct name such as toolContextMsg (or toolAssistantMsg) where it's created
and where it's consumed, and update related references like toolResultMessages
mapping and any code that pushes or reads this message so the original
ChatMessage assistantMsg remains unshadowed.
- Around line 177-184: The helper callLlm currently accepts a tools parameter
but ignores it — it always passes toolDefs into provider.streamChat; update
callLlm to use the provided tools when present (e.g., pass tools ?? toolDefs or
tools.length ? tools : toolDefs) so provider.streamChat receives the
caller-supplied tool list, or if callers never need tools remove the unused
tools parameter from callLlm and all its call sites; references: callLlm,
provider.streamChat, tools, toolDefs.
- Around line 254-256: The variable toolCallBuffer is declared but never used;
remove the unused declaration to clean up dead code in use-agent-chat.ts where
fullText, currentToolCalls (type ParsedToolCall[]) and toolCallBuffer are
declared, or if it was meant to accumulate tool-call args implement its usage by
piping token/argument text into toolCallBuffer and flushing into
currentToolCalls where the adapter/argument accumulation logic is handled;
either delete the toolCallBuffer declaration or wire it into the existing
parsing flow (functions/methods that build tool calls and finalize
ParsedToolCall entries) so it is referenced.
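The redaction comments above can be sketched as follows. This is a minimal stand-in, not the PR's actual `sanitizeForDisplay`: the key list and matching rules mirror the review's suggestions (secrets-only list, lowercase normalization), while the recursion details are assumptions.

```typescript
// Sketch of the suggested secrets-only redaction. Keys are stored lowercased
// so camelCase inputs ("privateKey", "apiKey") match after normalization.
const SENSITIVE_SECRET_KEYS = [
  "privatekey", "private_key", "secret", "apikey", "api_key",
  "secretkey", "mnemonic", "seed", "password", "token",
];

function isSecretKey(key: string): boolean {
  const lowerKey = key.toLowerCase();
  return SENSITIVE_SECRET_KEYS.some(
    (sk) => lowerKey === sk || lowerKey.endsWith(`_${sk}`)
  );
}

function sanitizeForDisplay(value: unknown): unknown {
  if (Array.isArray(value)) return value.map(sanitizeForDisplay);
  if (value && typeof value === "object") {
    return Object.fromEntries(
      Object.entries(value as Record<string, unknown>).map(([k, v]) =>
        isSecretKey(k) ? [k, "[redacted]"] : [k, sanitizeForDisplay(v)]
      )
    );
  }
  return value;
}
```

Because wallet-visible fields like `address` and `balance` are not in the list, they pass through untouched while true secrets are masked.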

---

Duplicate comments:
In `@apps/mobile/lib/agent-runtime/openai-adapter.ts`:
- Around line 48-87: The current loop over toolCalls emits a tool_call for each
chunk even when tc.function.arguments are fragmented; change to accumulate
argument fragments per tool call (use a Map keyed by tc.id or the tool_calls
index) by appending tc.function.arguments fragments into a buffer for that id,
attempt JSON.parse only when the buffer parses successfully, and only push a
single results entry for that id (or mark it emitted) once parse succeeds or
when finish_reason indicates the stream is complete; update the logic around
toolCalls, tc.function.arguments, results and use finish_reason to
force-finalize parsing and emit or surface a parse error instead of emitting
empty args.
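The buffering described above can be sketched like this. It follows OpenAI's streaming delta shape (id and function.name arrive only on the first fragment; later fragments carry only `index` and argument text); the surrounding adapter types are assumptions, not the PR's actual code.

```typescript
// Accumulate tool-call argument fragments per tool_calls index, then parse
// each buffer exactly once when the stream signals completion.
interface ToolCallDelta {
  index: number;
  id?: string;
  function?: { name?: string; arguments?: string };
}

interface BufferedCall {
  id: string;
  name: string;
  argBuffer: string;
}

function accumulateToolCalls(
  buffers: Map<number, BufferedCall>,
  deltas: ToolCallDelta[]
): void {
  for (const tc of deltas) {
    const entry = buffers.get(tc.index) ?? { id: "", name: "", argBuffer: "" };
    if (tc.id) entry.id = tc.id;                 // present on the first delta only
    if (tc.function?.name) entry.name = tc.function.name;
    if (tc.function?.arguments) entry.argBuffer += tc.function.arguments;
    buffers.set(tc.index, entry);
  }
}

// Call when finish_reason === "tool_calls": an empty buffer means the model
// sent no arguments, which is valid and becomes {}.
function finalizeToolCalls(buffers: Map<number, BufferedCall>) {
  return [...buffers.values()].map((b) => ({
    id: b.id,
    name: b.name,
    arguments: b.argBuffer ? JSON.parse(b.argBuffer) : {},
  }));
}
```

Parsing only at finalization avoids both the fragmented-JSON problem and the empty-args emission gate discussed in the comments.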

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 4

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
apps/mobile/app/(tabs)/agent.tsx (1)

453-470: ⚠️ Potential issue | 🟠 Major

Avoid logging raw chat errors in UI flow.

console.error("Chat send error:", err) can surface sensitive data in logs. Log only a sanitized message (or route to a secure logger).

🔒 Proposed fix

```diff
-                    } catch (err) {
-                      await haptic("error");
-                      console.error("Chat send error:", err);
-                    }
+                    } catch (err) {
+                      await haptic("error");
+                      console.error("Chat send error");
+                    }
```

As per coding guidelines, "Keep secrets out of git, logs, crash output, and LLM prompts".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/mobile/app/(tabs)/agent.tsx` around lines 453-470: replace the raw
console.error call in the onPress handler that wraps chatActions.sendMessage
(inside the isLive branch) with a sanitized logging approach: remove or avoid
printing the full err object (do not interpolate err), instead call a secure
logger or processLogger with a minimal message like "Chat send failed" and
optional non-sensitive context (e.g., user action, request id) or generate an
error id to surface, while keeping the haptic("error") and the existing
error-handling flow around chatActions.sendMessage so sensitive data from err is
not written to stdout/stderr; locate this in the onPress async handler where
chatActions.sendMessage is invoked and the current console.error("Chat send
error:", err) appears.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@apps/mobile/app/(tabs)/agent.tsx`:
- Around line 39-95: The sanitizeForDisplay function fails to redact camelCase
keys because SENSITIVE_KEYS/SENSITIVE_SECRET_KEYS contain mixed-case entries and
you compare lowerKey to them; normalize comparisons by lowercasing the
sensitive-key list (e.g., map SENSITIVE_SECRET_KEYS and SENSITIVE_KEYS to
lowercase when building keysToCheck) or lowercase each sk in the .some(...)
check so lowerKey === sk and lowerKey.endsWith(sk)/lowerKey.endsWith(`_${sk}`)
are comparing lowercased strings; update the keysToCheck creation and/or the
isSensitive check in sanitizeForDisplay accordingly to ensure camelCase secrets
like "privateKey" and "apiKey" are redacted.

In `@apps/mobile/lib/agent-runtime/openai-adapter.ts`:
- Around line 126-128: The code currently declares toolCallBuffers twice (an
outer and an inner shadowing declaration), which confuses state and makes the
finally cleanup incorrect; remove the inner declaration so there is a single
per-stream Map<string,string> used throughout (the one originally created as
"toolCallBuffers"), update any references inside the streaming handler to use
that single Map, and ensure the finally block clears or deletes entries from
that single toolCallBuffers instance instead of attempting to clean up a
shadowed variable; look for references to "toolCallBuffers" and the streaming
handler/finally block to apply this change.
- Around line 66-83: The emission gate currently checks Object.keys(args).length
> 0 which skips valid empty-argument tool calls; update the logic around the
parsing block in openai-adapter.ts (the buffer/args/toolCallBuffers parsing for
tcId) to track successful JSON parse (e.g., set a parsed flag when
JSON.parse(buffer) runs and toolCallBuffers.delete(tcId) is called) and change
the emit condition from Object.keys(args).length > 0 || !buffer to use that
parsed flag (e.g., parsed || !buffer) so we emit even when args is an empty
object.

In `@apps/mobile/lib/agent/use-agent-chat.ts`:
- Around line 385-392: In the catch block inside useAgentChat (where setState is
called on error), remove the raw console.error(err) and instead log only a
sanitized message (e.g., processLogger?.warn or console.warn with no sensitive
payload) and update state to clear all streaming flags: set isResponding: false
and isStreaming: false on the root state and also map over state.messages (or
the messages property) to set message.isStreaming = false for any message
currently streaming; keep the user-facing generic error text unchanged.

---

Outside diff comments:
In `@apps/mobile/app/(tabs)/agent.tsx`:
- Around line 453-470: Replace the raw console.error call in the onPress handler
that wraps chatActions.sendMessage (inside the isLive branch) with a sanitized
logging approach: remove or avoid printing the full err object (do not
interpolate err), instead call a secure logger or processLogger with a minimal
message like "Chat send failed" and optional non-sensitive context (e.g., user
action, request id) or generate an error id to surface, while keeping the
haptic("error") and the existing error-handling flow around
chatActions.sendMessage so sensitive data from err is not written to
stdout/stderr; locate this in the onPress async handler where
chatActions.sendMessage is invoked and the current console.error("Chat send
error:", err) appears.

---

Duplicate comments:
In `@apps/mobile/lib/agent-runtime/openai-adapter.ts`:
- Around line 44-65: The current loop over toolCalls drops fragments when later
deltas omit tc.id or tc.function.name; modify the accumulation logic in the
block handling toolCalls so you key buffers by the tool_calls array index (e.g.,
use the loop index) rather than requiring tc.id, and when you first encounter a
fragment record the tc.id and tc.function.name into a stored metadata object
(alongside the buffer) and thereafter always append tc.function.arguments
(newArgsFragment) to the buffer even if id/name are missing; update usages of
toolCallBuffers and the local variables tcId/funcName/newArgsFragment to
read/write from that index-keyed entry and preserve the original id/name when
they first appear.

@coderabbitai coderabbitai bot left a comment

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In `@apps/mobile/lib/agent-runtime/openai-adapter.ts`:
- Around line 81-91: The tool-call emission logic skips calls when args is {}
even if parsing succeeded because the local buffer still contains the raw
string; introduce a parseSucceeded flag set to true when JSON.parse of buffer
succeeds (in the same scope where args is replaced and the bufferMap entry is
deleted), and change the emission predicate in the block that builds the
tool_call (the check around Object.keys(args).length > 0 || !buffer) to also
emit when parseSucceeded is true (i.e., Object.keys(args).length > 0 || !buffer
|| parseSucceeded). This ensures calls like get_balances with empty {} args are
emitted while preserving existing behavior for non-parsed buffers; update
references to args, buffer, tcId, and funcName accordingly.

Contributor

@Calcutatator Calcutatator left a comment

Good architectural direction — the dual-hook pattern (demo + live), SSE streaming parser, and ToolCallCard component are well-conceived. However there are several critical issues that need fixing before merge:

Critical (must fix):

  1. sanitizeForDisplay secret redaction is broken. The SENSITIVE_SECRET_KEYS array uses camelCase ("privateKey", "apiKey") but the comparison lowercases the key first (lowerKey === sk). Since "privatekey" !== "privateKey", secrets are never redacted. This is a real security defect in a wallet app. Fix: lowercase the entries in the list, or compare lowerKey === sk.toLowerCase().

  2. OpenAI tool calling protocol is incomplete. When sending tool results back to the API, the assistant message must include the tool_calls array (the raw tool call objects with id/function/arguments), and each tool result message must carry the matching tool_call_id in the HTTP body. Without these, the follow-up API call will fail or produce incorrect behavior.

  3. Tool call argument streaming drops subsequent chunks. The SSE parser guard if (tc?.id && tc?.function?.name) only matches the first chunk per tool call — OpenAI sends id and function.name only on the first chunk, with subsequent chunks carrying just argument fragments keyed by index. Arguments will be truncated for any non-trivial tool call.

  4. No programmatic confirmation gate for execute_transfer. The system prompt says "always confirm before executing transfers" but this is a soft LLM instruction, not an enforced constraint. A prompt injection via crafted input could bypass this. There should be a hard gate in code — e.g., require explicit user approval in the UI before executeTool("execute_transfer", ...) runs.

Major (should fix):

  1. Stale closure on state.messages in the sendMessage callback. useCallback captures state.messages from the closure, but setState has already been called. After multiple exchanges, the conversation history sent to the LLM will be incomplete. Use a useRef for messages or derive from functional setState.

  2. No wallet context passed to tool execution. executeTool(tc.name, tc.arguments) doesn't include which account to operate on. Tools may not know which wallet to use.

  3. Raw error objects logged via console.error could contain API keys in network error payloads. Use sanitized logging.

Minor:

  • JSON completeness heuristic (startsWith("{") && endsWith("}")) is fragile — a partially streamed buffer such as {"a": "{b} ends in } while still inside a string literal, so it would false-match mid-accumulation. Prefer try/catch JSON.parse.
  • Model is hardcoded to "gpt-4o-mini" — should respect user's configured model.
  • No activity logging for tool executions — at minimum execute_transfer results should be logged.
  • Commits should be squashed before merge ("update 1" through "update 5").
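The stale-closure point (Major 1) can be shown without React. The sketch below uses a plain object in the role React's `useRef` plays inside the hook; in the actual fix, `messagesRef.current` would be kept in sync with state via an effect. Names are illustrative, not the PR's.

```typescript
// A callback that closes over a captured array sees the value from creation
// time; a callback that reads through a ref object always sees the latest.
type Message = { role: string; content: string };

function makeSenders(initial: Message[]) {
  const messagesRef = { current: initial }; // stand-in for useRef(messages)

  const captured = initial;                 // what a useCallback closure captures
  const sendStale = () => captured.length;
  const sendFresh = () => messagesRef.current.length;

  const setMessages = (next: Message[]) => {
    messagesRef.current = next;             // kept in sync, as an effect would do
  };

  return { sendStale, sendFresh, setMessages };
}
```

After an update, the closure-based reader still reports the old history length, which is exactly how the LLM ends up receiving an incomplete conversation.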

@omarespejel
Collaborator

@coderabbitai review

@omarespejel
Collaborator

@greptile-ai review

@coderabbitai

coderabbitai bot commented Feb 23, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.


@greptile-apps greptile-apps bot left a comment


5 files reviewed, 6 comments


@omarespejel
Collaborator

@coderabbitai review

@omarespejel
Collaborator

@greptile-ai review

@coderabbitai

coderabbitai bot commented Feb 23, 2026

✅ Actions performed

Review triggered.


@greptile-apps greptile-apps bot left a comment

5 files reviewed, 5 comments


@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@apps/mobile/app/(tabs)/agent.tsx`:
- Around line 463-470: The code in the isLive branch catches errors from
chatActions.sendMessage and currently logs the raw err, which may expose
secrets; change the catch to log only a safe identifier (e.g., err.name or a
sanitized message) and keep the haptic("error") call; locate the try/catch
around chatActions.sendMessage in the isLive branch of the component
(references: isLive, chatActions.sendMessage, haptic) and replace
console.error("Chat send error:", err) with a console.error call that emits a
non-sensitive value such as err.name or a fixed message combined with err.name.

In `@apps/mobile/lib/agent/use-agent-chat.ts`:
- Around line 214-221: The helper callLlm references the toolDefs variable
declared later, which is confusing; move the toolDefs declaration so it appears
before the callLlm function. Update the file so toolDefs is defined first, then
define callLlm to use toolDefs (keep the same signature and behavior for callLlm
and preserve the fallback logic tools && tools.length > 0 ? tools : toolDefs).
This reordering improves readability and prevents future refactor bugs related
to declaration order.
- Around line 376-391: The assistant message must include the original
tool_calls metadata and the ChatMessage type must support it: add an optional
toolCalls field to the ChatMessage/LlmMessage type in types.ts that matches
OpenAI's schema (array of {id, type, function: {name}, arguments}), then when
building toolContextMsg in use-agent-chat.ts populate toolContextMsg.toolCalls
from currentToolCalls (construct id, type, function.name and arguments) so the
assistant message reflects the executed calls, and finally update the OpenAI
adapter's request serializer to include tool_calls from ChatMessage/toolCalls
into the outgoing request body so callLlm(updatedMessages, ...) sends the
tool_calls array.
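The serialization the comment asks for can be sketched as below. The wire field names (`tool_calls`, `tool_call_id`, stringified `arguments`) follow OpenAI's Chat Completions API; the local `ParsedToolCall`/`ToolResult` shapes are assumptions standing in for the PR's types.

```typescript
// Build the two-part follow-up turn OpenAI expects after tool execution:
// an assistant message echoing its tool_calls, then one tool message per result.
interface ParsedToolCall {
  id: string;
  name: string;
  arguments: Record<string, unknown>;
}

interface ToolResult {
  tool_call_id: string;
  content: string;
}

function buildToolTurnMessages(calls: ParsedToolCall[], results: ToolResult[]) {
  const assistantMsg = {
    role: "assistant" as const,
    content: null,
    tool_calls: calls.map((c) => ({
      id: c.id,
      type: "function" as const,
      function: { name: c.name, arguments: JSON.stringify(c.arguments) },
    })),
  };
  const toolMsgs = results.map((r) => ({
    role: "tool" as const,
    tool_call_id: r.tool_call_id,
    content: r.content,
  }));
  return [assistantMsg, ...toolMsgs];
}
```

Without the echoed `tool_calls` on the assistant turn, the follow-up request has dangling `tool` messages and the API rejects or misinterprets it.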

ℹ️ Review info

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 557b998 and db4de44.

📒 Files selected for processing (3)
  • apps/mobile/app/(tabs)/agent.tsx
  • apps/mobile/lib/agent-runtime/openai-adapter.ts
  • apps/mobile/lib/agent/use-agent-chat.ts

@omarespejel
Collaborator

@coderabbitai review

@omarespejel
Collaborator

@greptile-ai review

@greptile-apps greptile-apps bot left a comment

5 files reviewed, 3 comments


@coderabbitai

coderabbitai bot commented Feb 23, 2026

✅ Actions performed

Review triggered.


@greptile-apps greptile-apps bot left a comment

5 files reviewed, 7 comments


@omarespejel
Collaborator

@coderabbitai review

@omarespejel
Collaborator

@greptile-ai review

@coderabbitai

coderabbitai bot commented Feb 23, 2026

✅ Actions performed

Review triggered.


@greptile-apps greptile-apps bot left a comment

5 files reviewed, 2 comments


@omarespejel
Collaborator

@coderabbitai review

@omarespejel
Collaborator

@greptile-ai review

@coderabbitai

coderabbitai bot commented Feb 24, 2026

✅ Actions performed

Review triggered.


@greptile-apps greptile-apps bot left a comment

5 files reviewed, 6 comments


@greptile-apps

greptile-apps bot commented Feb 24, 2026

Additional Comments (2)

apps/mobile/lib/agent/use-agent-chat.ts
Authorization check blocks execute_transfer but still returns blocked message to LLM and updates UI. The system prompt (line 547) tells the LLM that execute_transfer requires manual approval in the Transfer tab, but when the LLM tries to execute it, the tool returns "Blocked: this tool requires manual approval" as an error.

This creates poor UX: the LLM will likely apologize and tell the user to use the Transfer tab, but the Transfer tab integration isn't wired (per CLAUDE.md line 6: "Live Starknet execution libs exist but aren't wired to UI yet"). The agent can't actually help with transfers.

Either remove execute_transfer from the tool definitions entirely, or implement proper approval flow integration with the Transfer tab.

Prompt To Fix With AI
Path: apps/mobile/lib/agent/use-agent-chat.ts
Line: 862-876

apps/mobile/lib/agent/use-agent-chat.ts
executeTool executes without wallet context validation. The tool registry (core-tools.ts:250-329) calls executeTransfer which moves real funds using session keys, but executeTool at line 878 has no awareness of:

  • Which wallet is active
  • Whether the wallet has sufficient balance
  • Whether session keys are initialized
  • Network mismatch between tool params and wallet

get_balances tool requires network param (core-tools.ts:74-80) but execute_transfer and other tools don't validate that the provided network matches the active wallet's network before execution.

Add wallet context to tool execution or validate network consistency.

Prompt To Fix With AI
Path: apps/mobile/lib/agent/use-agent-chat.ts
Line: 878
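A pre-execution wallet-context check along the lines suggested above could look like this. The `WalletContext` shape and argument field names are assumptions for illustration, not the PR's actual types.

```typescript
// Validate wallet context before handing arguments to executeTool:
// reject missing wallets, network mismatches, and uninitialized session keys.
interface WalletContext {
  address: string;
  network: "mainnet" | "sepolia";
  sessionKeyReady: boolean;
}

function validateToolContext(
  toolName: string,
  args: Record<string, unknown>,
  wallet: WalletContext | null
): string | null {
  if (!wallet) return "No active wallet.";
  if (typeof args.network === "string" && args.network !== wallet.network) {
    return `Network mismatch: tool requested ${args.network}, wallet is on ${wallet.network}.`;
  }
  if (toolName === "execute_transfer" && !wallet.sessionKeyReady) {
    return "Session keys are not initialized.";
  }
  return null; // null means the call is safe to pass to executeTool
}
```

Returning the error string as the tool result lets the LLM explain the problem to the user instead of executing against the wrong network.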

@omarespejel
Collaborator

@greptile-ai review

@omarespejel
Collaborator

@coderabbitai review

@coderabbitai

coderabbitai bot commented Feb 24, 2026

✅ Actions performed

Review triggered.


@greptile-apps greptile-apps bot left a comment

5 files reviewed, 6 comments


@omarespejel
Collaborator

@greptile-ai review

@omarespejel
Collaborator

@coderabbitai review

@coderabbitai

coderabbitai bot commented Feb 24, 2026

✅ Actions performed

Review triggered.


@greptile-apps greptile-apps bot left a comment

5 files reviewed, 1 comment


Comment on lines +405 to +418
```typescript
if (!AUTO_EXECUTABLE_TOOLS.has(tc.name)) {
  const blockedResult =
    "Blocked: this tool requires manual approval. Use the Transfer tab to run write operations.";
  toolResults.push({
    tool_call_id: tc.id,
    content: blockedResult,
  });
  setState((s) => ({
    ...s,
    toolCalls: s.toolCalls.map((t) =>
      t.id === tc.id ? { ...t, result: blockedResult, status: "error" } : t
    ),
  }));
  continue;
```

prepare_transfer requires manual approval but should be auto-executable. It's a read-only validation operation that doesn't execute transfers - blocking it cripples the agent's ability to help users plan transactions.

Per CLAUDE.md:54: "Don't let the model construct raw calldata unsafely" - prepare_transfer doesn't construct calldata for execution, it only validates parameters. The actual execution in execute_transfer is correctly blocked.

Only execute_transfer needs manual approval. Move prepare_transfer to AUTO_EXECUTABLE_TOOLS:

Suggested change:

```suggestion
for (const tc of currentToolCalls) {
  if (!AUTO_EXECUTABLE_TOOLS.has(tc.name)) {
```

And update line 68:

```typescript
const AUTO_EXECUTABLE_TOOLS = new Set(["get_balances", "estimate_fee", "prepare_transfer"]);
```
Prompt To Fix With AI
Path: apps/mobile/lib/agent/use-agent-chat.ts
Line: 405-418
