Releases: plandex-ai/plandex

Release cli/v2.0.0

19 Mar 05:42

👋 Hi, Dane here. I'm the creator and lead developer of Plandex.

I'm excited to announce the release of Plandex v2, featuring major improvements in capabilities, user experience, and automation.

🤖  Overview

While built on the same basic foundations as v1, v2 is best thought of as a new project with far more ambitious goals.

Plandex is now a top-tier coding agent with fully autonomous capabilities.

By default, it combines the strengths of three top foundation model providers—Anthropic, OpenAI, and Google—to achieve significantly better coding results than can be achieved with only a single provider's models.

You get the coding abilities of Anthropic, the cost-effectiveness and speed of OpenAI's o3-mini, and the massive 2M token context window of Google Gemini, each used in the roles they're best suited for.

Plandex can:

  • Discuss a project or feature at a high level
  • Load relevant context as needed throughout the discussion
  • Solidify the discussion into a detailed plan
  • Implement the changes
  • Apply the changes to your files
  • Run necessary commands
  • Automatically debug failures

Taken together, these capabilities let Plandex handle complex tasks that span entire features or even whole projects, generating 50-100 files or more in a single run.

Below is a more detailed look at what's new. You can also check out the updated README, website, and docs.

🧠  Newer, Smarter Models

  • New default model pack combining Claude 3.7 Sonnet, o3-mini, and Gemini 1.5 Pro.

  • A new set of built-in models and model packs for different use cases, including daily-driver (the default pack), strong, cheap, and oss packs, among others.

  • New architect and coder roles that make it easier to use different models for different stages in the planning and implementation process.
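The `model-packs` and `set-model` commands mentioned later in these notes are the way to explore and switch between packs. A minimal sketch (the interactive menus may differ from what's shown in the comments):

```shell
# List all built-in model packs and the models they assign to each role
plandex model-packs

# Interactively switch the models used for the current plan;
# choosing a pack changes all roles (planner, architect, coder, etc.) at once
plandex set-model
```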

📥  Better Context Management

  • Automatic context selection with tree-sitter project maps (30+ languages supported).

  • Effective 2M token context window for large tasks (massive codebases of ~20M tokens and more can be indexed for automatic context selection).

  • Smart context management limits implementation steps to necessary files only, reducing costs and latency.

  • Prompt caching for OpenAI and Anthropic models further reduces latency and costs.
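Automatic context selection makes manual loading optional, but the `load` command is still available when you want to pin context yourself. A sketch, assuming `load` accepts file and directory paths (only the `-n` note flag is confirmed elsewhere in these notes):

```shell
# Manually load specific files or directories into context
plandex load src/server/ README.md

# Attach a free-form note to context; notes get auto-generated names
plandex load -n 'auth refactor: keep v1 tokens working'
```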

📝  Reliable File Edits

  • Much improved file editing performance and reliability, especially for large files.

  • Simple edits can often be applied deterministically without a model call, reducing costs and latency.

  • For more complex edits, validation and multiple fallbacks help ensure a very low failure rate.

  • Supports individual files up to 100k tokens.

  • On Plandex Cloud, a fine-tuned "instant apply" model further speeds up and reduces the cost of editing files up to 32k tokens in size. This is offered at no additional cost.

💻  New Developer Experience

  • v2 includes a new default way to use Plandex: the Plandex REPL. Just type plandex in any project directory to start the REPL.

  • Simple and intuitive chat-like experience.

  • Fuzzy autocomplete for commands and files, 'chat' vs. 'tell' modes that separate ideation from implementation, and a multi-line mode for friendly editing of long prompts.

  • All commands are still available as CLI calls directly from the terminal.
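For example, the two entry points look roughly like this (the `tell` command appears elsewhere in these notes; the exact prompt-passing syntax is an assumption):

```shell
# New default workflow: start the REPL in any project directory
cd my-project
plandex

# Equivalent direct CLI call, without entering the REPL
plandex tell 'add rate limiting to the public API endpoints'
```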

🚀  Configurable Automation

  • Plandex is now capable of full autonomy with 'full auto' mode. It can load necessary context, apply changes, execute commands, and automatically debug problems.

  • The automation level can be precisely configured depending on the task and your comfort level. A basic mode works just like Plandex v1, where files are loaded manually and execution is disabled. The new default in v2 is semi-auto, which enables automatic context loading, but still requires approval to apply changes and execute commands.

  • By default, Plandex now includes command execution (with approval) in its planning process. It can install dependencies, build and run code, run tests, and more.

  • Command execution is integrated with Plandex's diff review sandbox. Changes are tentatively applied before running commands, then rolled back if the command fails.

  • A new debug command allows for automated debugging of any terminal command. Use it with type checkers, linters, builds, tests, and more.
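A hypothetical invocation of the debug command, assuming the wrapped command is passed as a quoted argument (the exact syntax isn't documented in these notes; check the docs for the real interface):

```shell
# Run a command and let Plandex automatically debug any failures
plandex debug 'go build ./...'
plandex debug 'npm test'
```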

💳  Built-in Payments, Credits, and Budgeting on Plandex Cloud

  • Alongside the open source version of Plandex, which includes all core features, Plandex Cloud is a full-fledged product.

  • It offers two subscription options: an Integrated Models mode that requires no additional accounts or API keys, and a BYO API Key mode that allows you to use your own OpenAI and OpenRouter.ai accounts and API keys.

  • In Integrated Models mode, you buy credits from Plandex Cloud and manage billing centrally. It includes usage tracking and reporting via the usage command, as well as convenience and budgeting features like an auto-recharge threshold, a notification threshold on monthly spend, and an overall monthly limit. You can learn more about pricing here.

  • Billing settings are managed with a web dashboard (it can be accessed via the CLI with the billing command).
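The `usage` and `billing` commands referenced above can be sketched as follows (output shown is illustrative, not actual CLI output):

```shell
# Show usage tracking and spend reporting for the current billing period
plandex usage

# Open the web billing dashboard from the CLI
plandex billing
```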

🪪  License Update

  • Plandex has transitioned from AGPL 3.0 to the MIT License, simplifying future open-source contributions and allowing easier integration of proprietary enhancements in Plandex Cloud and related products.

  • If you’ve previously contributed under AGPL and have concerns about this relicensing, please reach out.

🧰  And More

This isn't an exhaustive list! Apart from the above, there are many smaller features, bug fixes, and quality of life improvements. Give the updated docs a read for a full accounting of all commands and functionality.

🌟  Get Started

Go to the quickstart to get started with v2 in minutes.

Note: while built on the same foundations, Plandex v2 is designed to run separately and independently from v1. It's not an in-place upgrade, so there's nothing in particular you need to do to upgrade; just follow the quickstart as if you were a brand new user. More details here.

🙌  Don't Be A Stranger

Release cli/v2.0.0-rc.6

09 Mar 09:21

Likely final release candidate for v2

Release cli/v2.0.0-rc.5

04 Mar 19:58

Placeholder for v2 release testing

Release cli/v2.0.0-rc.4

04 Mar 18:51

Placeholder for v2 release testing

Release cli/v2.0.0-rc.3

04 Mar 18:23

Placeholder for v2 release testing

Release cli/v2.0.0-rc.2

27 Feb 09:02

Placeholder for v2 release testing

Release cli/v2.0.0-rc.1

27 Feb 08:07

Placeholder for v2 release testing

Release server/v1.1.1

21 Jun 17:14

  • Improvements to stream handling that greatly reduce flickering in the terminal when streaming a plan, especially when many files are being built simultaneously. CPU usage is also reduced on both the client and server side.
  • Claude 3.5 Sonnet model and model pack (via OpenRouter.ai) are now built-in.

Release cli/v1.1.1

21 Jun 17:17

Fix for terminal flickering when streaming plans 📺

Improvements to stream handling that greatly reduce flickering in the terminal when streaming a plan, especially when many files are being built simultaneously. CPU usage is also reduced on both the client and server side.

Claude 3.5 Sonnet model pack is now built-in 🧠

You can now easily use Claude 3.5 Sonnet with Plandex through OpenRouter.ai.

  1. Create an account at OpenRouter.ai if you don't already have one.
  2. Generate an OpenRouter API key.
  3. Run export OPENROUTER_API_KEY=... in your terminal.
  4. Run plandex set-model, select 'choose a model pack' to change all roles at once, then choose either anthropic-claude-3.5-sonnet (which uses Claude 3.5 Sonnet for all heavy lifting and Claude 3 Haiku for lighter tasks) or anthropic-claude-3.5-sonnet-gpt-4o (which uses Claude 3.5 Sonnet for planning and summarization, gpt-4o for builds, and gpt-3.5-turbo for lighter tasks).
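The steps above can be sketched as a shell session (the key value below is a placeholder, not a real key; step 4 is interactive):

```shell
# Steps 1-3: create an OpenRouter.ai account, generate an API key,
# then export it for the current shell session
export OPENROUTER_API_KEY="sk-or-placeholder"

# Step 4: run `plandex set-model` and pick one of the
# anthropic-claude-3.5-sonnet packs from the interactive menu
```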

Remember, you can run plandex model-packs for details on all built-in model packs.

Release server/v1.1.0

11 Jun 15:10

  • Notes added to context with plandex load -n 'some note' now get automatically generated names in the context ls list.
  • Fixes for summarization and auto-continue issues that could cause Plandex to lose track of where it is in the plan and repeat tasks or do tasks out of order, especially when using tell and continue after the initial tell.
  • Improvements to the verification and auto-fix step. Plandex is now more likely to catch and fix placeholder references like "// ... existing code ..." as well as incorrect removal or overwriting of code.
  • After a context file is updated, Plandex is less likely to use an old version of the code from earlier in the conversation; it now uses the latest version much more reliably.
  • Increased wait times when receiving rate limit errors from the OpenAI API (common with new OpenAI accounts that haven't yet spent $50).