
GhostContext


A low-latency, context-aware AI tab-completion engine for Visual Studio Code. GhostContext features a high-performance inference architecture supporting BYOK (FIM-compatible endpoints), telemetry-free local LLM execution, and granular AST-driven context gathering.

Features

  • Bring Your Own Key (BYOK): Directly integrate with any standard API endpoint supporting Fill-In-The-Middle (FIM) models (e.g., DeepSeek-Coder-V2, Codestral).
  • Local Execution Engine: Native support for local inference via Ollama or LM Studio, ensuring zero telemetry and absolute codebase privacy.
  • Managed Tier: Built-in integration for a managed inference proxy, allowing seamless onboarding and testing via free compute credits.
  • Sub-200ms Latency: Optimized request debouncing and context pruning to ensure rapid ghost text rendering without rate-limit exhaustion.
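As an illustration of how a BYOK endpoint might be wired up, the sketch below shows a possible VS Code `settings.json` fragment. The setting keys and values here are hypothetical, chosen only to convey the shape of such configuration; consult the extension's actual contribution points for the real names.

```json
{
  // Hypothetical keys for illustration only.
  "ghostcontext.provider": "byok",
  "ghostcontext.endpoint": "https://api.deepseek.com/beta/completions",
  "ghostcontext.model": "deepseek-coder",
  // For local inference, an Ollama endpoint could be pointed at instead:
  // "ghostcontext.endpoint": "http://localhost:11434"
  "ghostcontext.apiKey": "<your-api-key>"
}
```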

Architecture

The project is structured to separate editor UI from context processing:

  • Client Extension: Built in TypeScript utilizing the VS Code InlineCompletionItemProvider API for non-blocking UI rendering.
  • Context Engine: Parses the active document and neighboring tabs to construct optimal prompt structures (Prefix + Suffix) for FIM models.
  • Inference Proxy (Optional Backend): Compatible with high-performance network runtimes (like Bun or Go) to handle rate limiting, authentication, and token management for the managed tier.
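The prefix/suffix construction described above can be sketched as a pure function. This is an illustrative assumption, not the extension's actual implementation: the function name is hypothetical, and the `<PRE>`/`<SUF>`/`<MID>` sentinels follow the Code Llama infilling convention — FIM-capable models (DeepSeek-Coder, Codestral, etc.) each use their own sentinel tokens.

```typescript
// Sketch: split the document at the cursor and wrap the halves in
// FIM sentinel tokens. Real FIM endpoints typically accept prefix and
// suffix as separate request fields, or expect model-specific sentinels.
function buildFimPrompt(documentText: string, cursorOffset: number): string {
  const prefix = documentText.slice(0, cursorOffset);
  const suffix = documentText.slice(cursorOffset);
  // Code Llama-style infill layout; swap sentinels per target model.
  return `<PRE> ${prefix} <SUF>${suffix} <MID>`;
}
```

In the real Context Engine this would also fold in content from neighboring tabs and prune the prefix/suffix to a token budget before dispatch.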

Installation (Development)

  1. Clone the repository:
    git clone https://github.com/SAYOUNCDR/GhostContext.git
