
Add MiniMax as a built-in LLM provider#41

Open
octo-patch wants to merge 1 commit into spcl:main from octo-patch:feature/add-minimax-provider

Conversation

@octo-patch

Summary

  • Add MiniMax M2.7 / M2.7-highspeed as a third language model backend alongside ChatGPT and LLaMA-2
  • New MiniMax class extending AbstractLanguageModel, using the OpenAI-compatible API at https://api.minimax.io/v1
  • Temperature clamping to [0, 1], per-call response handling (MiniMax does not support the n parameter), and cumulative token/cost tracking
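The clamping and per-call behavior summarized above can be sketched as follows. This is a minimal illustration, not the PR's actual code: `clamp_temperature`, `query_n`, and `call_api` are hypothetical stand-ins for logic that lives inside the new `MiniMax` class.

```python
def clamp_temperature(temperature: float) -> float:
    """Clamp the sampling temperature to the [0, 1] range MiniMax accepts."""
    return max(0.0, min(1.0, temperature))


def query_n(prompt: str, num_responses: int, call_api) -> list:
    """MiniMax does not support the OpenAI `n` parameter, so issue one
    request per desired response and collect the results.

    `call_api` stands in for a single OpenAI-compatible chat-completion
    request against https://api.minimax.io/v1.
    """
    return [call_api(prompt) for _ in range(num_responses)]
```

Issuing one request per response keeps `get_response_texts()` compatible with callers that expect `num_responses > 1`, at the cost of extra round trips.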

Changes

| File | Description |
| --- | --- |
| `graph_of_thoughts/language_models/minimax.py` | New `MiniMax` class with `query()`, `chat()`, `get_response_texts()` |
| `graph_of_thoughts/language_models/__init__.py` | Export `MiniMax` |
| `graph_of_thoughts/language_models/config_template.json` | Add `minimax` and `minimax-highspeed` config entries |
| `graph_of_thoughts/language_models/README.md` | MiniMax setup documentation with config table |
| `README.md` | Quick Start code comments showing MiniMax usage |
| `tests/test_minimax.py` | 21 unit tests (init, query, caching, token tracking, config template) |
| `tests/test_minimax_integration.py` | 3 integration tests (simple query, multiple responses, highspeed model) |
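The two new config entries might look roughly like this. The field names below are illustrative assumptions based on the summary (model id, API key, clamped temperature); see `config_template.json` in the diff for the actual keys:

```json
{
    "minimax": {
        "model_id": "MiniMax-M2.7",
        "api_key": "",
        "temperature": 1.0,
        "max_tokens": 1024
    },
    "minimax-highspeed": {
        "model_id": "MiniMax-M2.7-highspeed",
        "api_key": "",
        "temperature": 1.0,
        "max_tokens": 1024
    }
}
```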

Usage

```python
from graph_of_thoughts import language_models

# Set MINIMAX_API_KEY env var or fill api_key in config.json
lm = language_models.MiniMax("config.json", model_name="minimax")
```

Test plan

  • 21 unit tests pass (mocked API calls)
  • 3 integration tests pass (real MiniMax API calls with MINIMAX_API_KEY)
  • Existing ChatGPT and LLaMA-2 tests pass unchanged

Add MiniMax M2.7 and M2.7-highspeed as language model backends using
the OpenAI-compatible API at https://api.minimax.io/v1. Follows the
existing AbstractLanguageModel pattern with temperature clamping (0-1),
per-call response handling (no n parameter), and token/cost tracking.

- New MiniMax class extending AbstractLanguageModel
- Config template entries for minimax and minimax-highspeed
- MINIMAX_API_KEY env var support with config fallback
- Language Models and main README documentation
- 21 unit tests + 3 integration tests
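The env-var-with-config-fallback behavior listed above can be sketched like this. The helper name `resolve_api_key` is illustrative, not the actual code in the PR:

```python
import os


def resolve_api_key(config: dict) -> str:
    """Prefer the MINIMAX_API_KEY environment variable; fall back to
    the api_key field loaded from config.json."""
    return os.environ.get("MINIMAX_API_KEY") or config.get("api_key", "")
```

Preferring the environment variable lets `config_template.json` be committed with an empty `api_key` while real keys stay out of version control.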
