Add a minimalist chat UI based on Gradio chat components, with a single purpose: let you use an LLM on the same GPU where you run Comfy.
Proposal:
- Can be enabled in settings
- A separate UI available at its own link
- Interrupt and pause the current task in the mcww UI while the LLM generates its answer, then restart it
- An unload-LLM callback that runs before ComfyUI generation begins
- Clear cache and unload models before calling the LLM
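The pause/unload handshake above could be sketched as a small arbiter object. Everything here is an assumption for illustration: the class name `GpuArbiter` and the four callbacks (`pause_comfy`, `resume_comfy`, `unload_llm`, `free_comfy_models`) are hypothetical hooks the host app would supply, not existing mcww or ComfyUI APIs.

```python
import threading


class GpuArbiter:
    """Coordinates one GPU shared by ComfyUI and a local LLM.

    All four callbacks are hypothetical hooks supplied by the host app:
    - pause_comfy / resume_comfy: pause and restart the current Comfy task
    - unload_llm: drop the LLM from VRAM
    - free_comfy_models: clear Comfy's cache and models before the LLM runs
    """

    def __init__(self, pause_comfy, resume_comfy, unload_llm, free_comfy_models):
        self._lock = threading.Lock()  # serializes GPU ownership changes
        self._pause_comfy = pause_comfy
        self._resume_comfy = resume_comfy
        self._unload_llm = unload_llm
        self._free_comfy_models = free_comfy_models

    def run_llm(self, generate):
        # Pause the current Comfy task and free its models/cache,
        # run the LLM generation, then resume Comfy even on error.
        with self._lock:
            self._pause_comfy()
            self._free_comfy_models()
            try:
                return generate()
            finally:
                self._resume_comfy()

    def before_comfy_generation(self):
        # Callback hooked in before Comfy starts generating:
        # make sure the LLM is unloaded from the GPU first.
        with self._lock:
            self._unload_llm()
```

A Gradio chat handler would then wrap its model call in `arbiter.run_llm(...)`, and Comfy's queue would invoke `arbiter.before_comfy_generation()` before each job.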