# Scaffolding Framework Examples for AReaL

This directory contains examples demonstrating how to use the Scaffolding framework
with AReaL for reinforcement learning training.

## Overview

The Scaffolding framework provides a modular and extensible way to compose
inference-time methods with RL training. It decouples the inference logic
(Controllers) from the execution backend (Workers), so that rollout, reward, and
trajectory-tracing methods can be composed flexibly.

### Key Components

1. **Controller**: Defines the inference-time compute logic (e.g., generation, reward
   computation)
1. **Worker**: Handles the actual execution of tasks (e.g., TRT-LLM, OpenAI API)
1. **ScaffoldingLlm**: Orchestrates controllers and workers together
1. **ScaffoldingWorkflow**: Wraps ScaffoldingLlm as a `RolloutWorkflow` for AReaL
   training

### AReaL-Specific Components

The following components are implemented in `examples/scaffolding/`:

- **`CreateWorkerFromEngine`**: Creates a scaffolding Worker from AReaL's
  InferenceEngine (e.g., RemoteSGLangEngine). The returned Worker is similar to
  scaffolding's `OpenaiWorker` but integrated with AReaL's engine.

- **`RLVRRewardController`**: A Controller that computes rewards for generated samples
  using verifiable reward functions (e.g., math answer verification; see the sketch
  after this list).

- **`PipelineTrajectoryMaker`**: A Controller that composes generation and reward
  controllers into a pipeline that produces training trajectories.

- **`ScaffoldingWorkflow`**: A `RolloutWorkflow` implementation that wraps
  ScaffoldingLlm for integration with AReaL's training pipeline.

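For concreteness, a verifiable reward function of the kind `RLVRRewardController`
consumes might look like the sketch below. This is a minimal illustration, not the
shipped implementation: the exact keyword arguments (`answer` in particular) and the
answer-extraction rule are assumptions.

```python
import re


def gsm8k_reward_fn(prompt: str, completion: str, answer: str, **kwargs) -> float:
    """Binary verifiable reward: 1.0 if the last number in the completion
    matches the reference answer, 0.0 otherwise."""
    numbers = re.findall(r"-?\d+(?:\.\d+)?", completion.replace(",", ""))
    if not numbers:
        return 0.0
    try:
        return float(float(numbers[-1]) == float(answer))
    except ValueError:
        return 0.0
```
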
## RLVR Example with GSM8K

### Quick Start

```bash
python examples/scaffolding/gsm8k_rlvr_scaffolding.py \
  --config examples/scaffolding/gsm8k_rlvr_scaffolding.yaml
```

### Architecture

The scaffolding workflow follows the pattern proposed in the RFC (see References):

```python
# Step 1: Create Worker from the SGLang engine
rollout_worker = CreateWorkerFromEngine(engine)

# Step 2: Create controllers
rollout_controller = NativeGenerationController()
reward_controller = RLVRRewardController(gsm8k_reward_fn)

# Step 3: Create trajectory maker (composes the controllers)
trajectory_maker = PipelineTrajectoryMaker(rollout_controller, reward_controller)

# Step 4: Create ScaffoldingLlm (orchestrates controllers with workers)
scaffolding_llm = ScaffoldingLlm(
    trajectory_maker,
    {NativeGenerationController.WorkerTag.GENERATION: rollout_worker},
)

# Step 5: Create ScaffoldingWorkflow (wraps as RolloutWorkflow)
scaffolding_workflow = ScaffoldingWorkflow(scaffolding_llm)
```

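Once built, the workflow is used like any other `RolloutWorkflow`. A minimal usage
sketch, assuming AReaL's `rollout_batch` entry point on the inference engine and a
hypothetical `data_batch` list of prompt dicts:

```python
# data_batch: a list of dicts, e.g. [{"messages": [...]}, ...] (hypothetical)
trajectories = engine.rollout_batch(data_batch, workflow=scaffolding_workflow)
```
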
### Data Flow Diagram

```
                ┌─────────────────────────────────────────────────┐
                │               ScaffoldingWorkflow               │
                │                                                 │
                │   ┌───────────────────────────────────────────┐ │
                │   │              ScaffoldingLlm               │ │
                │   │                                           │ │
                │   │  ┌─────────────────────────────────────┐  │ │
                │   │  │       PipelineTrajectoryMaker       │  │ │
                │   │  │                                     │  │ │
                │   │  │  ┌───────────────────────────────┐  │  │ │
Data ───────────┼───┼──┼──►  NativeGenerationController   │  │  │ │
                │   │  │  │    (from scaffolding.core)    │  │  │ │
                │   │  │  └───────────────┬───────────────┘  │  │ │
                │   │  │                  │                  │  │ │
                │   │  │                  ▼                  │  │ │
                │   │  │  ┌───────────────────────────────┐  │  │ │
                │   │  │  │     RLVRRewardController      │  │  │ │
                │   │  │  │   (from areal.experimental)   │  │  │ │
                │   │  │  └───────────────┬───────────────┘  │  │ │
                │   │  │                  │                  │  │ │
                │   │  └──────────────────┼──────────────────┘  │ │
                │   │                     │                     │ │
                │   └─────────────────────┼─────────────────────┘ │
                │                         │                       │
                └─────────────────────────┼───────────────────────┘
                                          │
                                          ▼  Trajectories
                           ┌─────────────────────────────┐
                           │         PPOTrainer          │
                           │     (GRPO/PPO Training)     │
                           └─────────────────────────────┘
                                          │
               via CreateWorkerFromEngine │
                                          ▼
                     ┌─────────────────────────────────────────┐
                     │           RemoteSGLangEngine            │
                     │        (AReaL Inference Backend)        │
                     └─────────────────────────────────────────┘
```

### How It Works

1. **Engine Initialization**: `RemoteSGLangEngine` is initialized with the rollout
   configuration and connected to the model server.

1. **Worker Creation**: `CreateWorkerFromEngine(engine)` wraps the engine into a
   scaffolding-compatible Worker. This allows scaffolding controllers to use AReaL's
   inference backends.

1. **Controller Pipeline**:

   - `NativeGenerationController()`: Handles text generation by yielding
     `GenerationTask` objects to the Worker.
   - `RLVRRewardController(reward_fn)`: Computes rewards for generated samples using
     the provided reward function.
   - `PipelineTrajectoryMaker(gen_ctrl, reward_ctrl)`: Composes these controllers into
     a pipeline that produces training trajectories.

1. **ScaffoldingLlm**: Orchestrates the trajectory maker with the worker, handling the
   async execution of tasks.

1. **ScaffoldingWorkflow**: Wraps the ScaffoldingLlm as a `RolloutWorkflow` that can be
   used directly with AReaL's `PPOTrainer` (see the interface sketch after this list).

1. **Training**: The trainer calls the workflow to generate trajectories, which are
   then used for GRPO/PPO training.

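The contract between steps 5 and 6 is AReaL's `RolloutWorkflow` interface. The
following is a minimal sketch of how the wrapper might satisfy it; the import path,
the `arun_episode` signature, and the `generate_async` call on `ScaffoldingLlm` should
be treated as assumptions rather than the exact shipped code:

```python
from areal.api.workflow_api import RolloutWorkflow  # import path assumed


class ScaffoldingWorkflowSketch(RolloutWorkflow):
    def __init__(self, scaffolding_llm):
        self.scaffolding_llm = scaffolding_llm

    async def arun_episode(self, engine, data):
        # Drive the controller pipeline; the result carries generated tokens
        # plus the reward attached by RLVRRewardController.
        result = await self.scaffolding_llm.generate_async(data["messages"])
        return result  # consumed by the trainer as a training trajectory
```
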
### Configuration

See `gsm8k_rlvr_scaffolding.yaml` for the full configuration. Key options:

```yaml
# Model configuration
pretrain_path: Qwen/Qwen2.5-3B-Instruct
tokenizer_path: Qwen/Qwen2.5-3B-Instruct

# Generation hyperparameters
gconfig:
  max_new_tokens: 1024
  temperature: 1.0
  top_p: 1.0
  n_samples: 8

# Inference engine configuration
engine:
  type: sglang
  tp: 1
  max_model_len: 4096
```

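The example script consumes these options at startup. As a rough sketch of how the
generation block maps onto sampling parameters, assuming a plain YAML load (the script
itself may use AReaL's config loader instead):

```python
import yaml

with open("examples/scaffolding/gsm8k_rlvr_scaffolding.yaml") as f:
    cfg = yaml.safe_load(f)

# Each prompt is expanded into n_samples completions with these settings.
sampling_params = {
    "max_new_tokens": cfg["gconfig"]["max_new_tokens"],
    "temperature": cfg["gconfig"]["temperature"],
    "top_p": cfg["gconfig"]["top_p"],
}
```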
|
## Extending the Framework

### Custom Reward Controllers

You can create custom reward controllers by subclassing the base Controller:

```python
from examples.scaffolding._compat import Controller


class CustomRewardController(Controller):
    def __init__(self, reward_fn):
        super().__init__()
        self.reward_fn = reward_fn

    def process(self, tasks, **kwargs):
        # Compute rewards for completed generation tasks
        for task in tasks:
            reward = self.reward_fn(
                prompt=task.input_str,
                completion=task.output_str,
                **kwargs,
            )
            task.customized_result_fields["reward"] = reward
        yield tasks
```

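A custom controller drops into the pipeline exactly like the built-in one. A brief
usage sketch with a toy, hypothetical reward function:

```python
def length_penalty_reward_fn(prompt, completion, **kwargs):
    # Toy reward: full credit for short answers, decaying with length.
    return max(0.0, 1.0 - len(completion) / 4096)


reward_controller = CustomRewardController(length_penalty_reward_fn)
trajectory_maker = PipelineTrajectoryMaker(rollout_controller, reward_controller)
```
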
### Custom Trajectory Makers

For different RL algorithms, you may need different trajectory formats:

```python
import torch

from examples.scaffolding._compat import Controller


class CustomTrajectoryMaker(Controller):
    def __init__(self, generation_controller, reward_controller):
        super().__init__()
        self.generation_controller = generation_controller
        self.reward_controller = reward_controller

    def process(self, tasks, **kwargs):
        # Run generation
        yield from self.generation_controller.process(tasks, **kwargs)

        # Run reward computation
        yield from self.reward_controller.process(tasks, **kwargs)

        # Build trajectories
        trajectories = []
        for task in tasks:
            trajectory = {
                "input_ids": torch.tensor(task.output_tokens),
                "rewards": torch.tensor(task.customized_result_fields["reward"]),
            }
            trajectories.append(trajectory)
        yield trajectories
```

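A custom trajectory maker replaces `PipelineTrajectoryMaker` in the orchestration
shown earlier; everything else stays the same:

```python
scaffolding_llm = ScaffoldingLlm(
    CustomTrajectoryMaker(rollout_controller, reward_controller),
    {NativeGenerationController.WorkerTag.GENERATION: rollout_worker},
)
scaffolding_workflow = ScaffoldingWorkflow(scaffolding_llm)
```
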
## References

- [TensorRT-LLM Scaffolding README](https://github.com/NVIDIA/TensorRT-LLM/tree/main/tensorrt_llm/scaffolding)
- [AReaL Workflow Documentation](../../docs/customization/workflow.md)
- [RFC: Scaffolding Integration](https://github.com/inclusionAI/AReaL/issues/818)