Replies: 1 comment
Thanks for the idea! I moved this over to a discussion. To start, I'd be curious to hear more about the situations where it stops due to token limit issues... do you mean context size limits or running out of credits? I know you're using custom models, but at least with the default models, hitting context size limits shouldn't normally be happening unless you're dealing with some massive (100k+ token) files. Custom models could be another story though, of course.

I'm also curious to hear more about other kinds of errors it's getting stuck on. Are you talking about using full auto mode and pausing after 5 debug tries? Or an actual error message from Plandex? If it's the latter, I'd consider that a bug to fix, so I'd like to hear about it. For the former, I definitely have ideas about beefing up debugging, perhaps subbing in a reasoning model or a model with web search abilities after a couple of failures with similar output.

But yeah, short answer is I think there's definitely potential here.
One of the best ways I use Plandex is to let it run in the background, completing some analysis or a new section of the code.
Often, though, I come back and Plandex has hit an insurmountable issue (for example: too many tokens to continue this plan), and it usually stops there, sometimes making the situation hard to recover even manually.
Some errors are known in advance to be ones the system can't get past on its own. So I was thinking one option would be to add another role, something like a "bailout" model, which ideally has a large context window and is smart enough to get out of sticky situations.
This could perhaps be available only in full auto mode, and even then it could be configurable how many times it tries before giving up.
This would also help in setups where a weaker local model handles most of the work, with the "big brother" model stepping in when it gets stuck.
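To make the idea concrete, here's a rough sketch of the escalation flow I have in mind. All of the names here are hypothetical placeholders, not anything Plandex actually exposes:

```python
from dataclasses import dataclass

@dataclass
class Result:
    """Placeholder for a model call outcome (not a Plandex type)."""
    ok: bool
    value: str = ""
    error: str = ""

def run_with_bailout(task, primary, bailout, max_primary_tries=3):
    """Try the primary (e.g. weaker local) model first; after it fails
    max_primary_tries times, escalate once to the stronger bailout model."""
    last_error = ""
    for _ in range(max_primary_tries):
        result = primary(task)
        if result.ok:
            return result
        last_error = result.error
    # Primary model is stuck: hand off to the "big brother" model,
    # passing along the last failure so it has context to recover.
    return bailout(task, failure_context=last_error)
```

The key design point is that the bailout model receives the failure context rather than starting cold, so it can diagnose why the primary model got stuck.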
What do you think?