# Reasoning Content Support

The InstructLab Training library supports structured reasoning traces through the `reasoning_content` field in message samples. This feature enables training models that can separate their thinking process from their final output.

## Overview

The `reasoning_content` field is an optional addition to the standard message format that allows you to include the model's internal reasoning process alongside the final response. This is particularly useful for:

- Training reasoning-capable models that show their work
- Supporting models that need to generate step-by-step reasoning
- Enabling chain-of-thought style training data
- Separating internal thinking from user-facing responses

## Message Format

### Standard Message Format

```json
{
  "role": "assistant",
  "content": "The answer is 42."
}
```

### Extended Message Format with Reasoning Content

```json
{
  "role": "assistant",
  "content": "The answer is 42.",
  "reasoning_content": "Let me think about this step by step. The question asks for the meaning of life, and according to The Hitchhiker's Guide to the Galaxy, the answer is 42."
}
```

## Data Processing Behavior

When processing messages during training:

1. **Unmasking Rules**: Both `content` and `reasoning_content` fields follow the same unmasking rules based on the message role
2. **Template Integration**: Both fields are processed by the chat template and included in the tokenized output
3. **Token Wrapping**: If a role is configured to be unmasked, both fields (when present) are wrapped with unmask tokens
4. **Independent Fields**: Either field can exist independently; messages can have only `content`, only `reasoning_content`, or both
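The wrapping behavior described above can be sketched as a small helper. The unmask-token strings match the ones documented under "Token Processing" below, but `wrap_message` itself is a hypothetical illustration, not the library's actual API:

```python
# Minimal sketch of the per-message wrapping logic described above.
# The sentinel strings are the documented unmask tokens; wrap_message
# is an illustrative helper, not the library's internal implementation.

UNMASK_BEGIN = "<|UNMASK_BEGIN|>"
UNMASK_END = "<|UNMASK_END|>"
UNMASK_REASONING_BEGIN = "<|UNMASK_REASONING_BEGIN|>"
UNMASK_REASONING_END = "<|UNMASK_REASONING_END|>"


def wrap_message(message: dict, unmask_roles: set) -> dict:
    """Wrap content fields with unmask tokens when the role is targeted."""
    if message["role"] not in unmask_roles:
        return dict(message)  # masked roles pass through unchanged
    wrapped = dict(message)
    if "content" in wrapped:
        wrapped["content"] = f"{UNMASK_BEGIN}{wrapped['content']}{UNMASK_END}"
    if "reasoning_content" in wrapped:  # either field may appear independently
        wrapped["reasoning_content"] = (
            f"{UNMASK_REASONING_BEGIN}{wrapped['reasoning_content']}{UNMASK_REASONING_END}"
        )
    return wrapped


msg = {"role": "assistant", "content": "15 * 23 = 345"}
print(wrap_message(msg, {"assistant"})["content"])
# <|UNMASK_BEGIN|>15 * 23 = 345<|UNMASK_END|>
```

Note that a message lacking one of the two fields is simply wrapped on the field it does have, matching rule 4 above.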

## Usage Examples

### Training Data with Reasoning Traces

```json
{
  "messages": [
    {
      "role": "user",
      "content": "What is 15 * 23?"
    },
    {
      "role": "assistant",
      "reasoning_content": "I need to multiply 15 by 23. Let me break this down: 15 * 23 = 15 * (20 + 3) = 15 * 20 + 15 * 3 = 300 + 45 = 345",
      "content": "15 * 23 = 345"
    }
  ]
}
```

### Mixed Content Types

```json
{
  "messages": [
    {
      "role": "user",
      "content": "Solve this math problem step by step: 2x + 5 = 13"
    },
    {
      "role": "assistant",
      "reasoning_content": "I need to solve for x. First, I'll subtract 5 from both sides: 2x = 8. Then divide by 2: x = 4.",
      "content": "To solve 2x + 5 = 13:\n1. Subtract 5 from both sides: 2x = 8\n2. Divide by 2: x = 4\n\nTherefore, x = 4."
    }
  ]
}
```

### Reasoning-Only Responses

```json
{
  "messages": [
    {
      "role": "user",
      "content": "Think about the implications of AI safety."
    },
    {
      "role": "assistant",
      "reasoning_content": "This is a complex topic that requires careful consideration of multiple factors including alignment, capability control, and social implications..."
    }
  ]
}
```

## Implementation Details

### Token Processing

During data processing, the library:

1. Wraps both `content` and `reasoning_content` with special unmask tokens (`<|UNMASK_BEGIN|>`, `<|UNMASK_END|>`, `<|UNMASK_REASONING_BEGIN|>`, `<|UNMASK_REASONING_END|>`)
2. Applies the chat template to the combined message content
3. Processes the tokenized sequence to create appropriate labels for training
4. Removes the special unmask tokens from the final training data
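Steps 3 and 4 can be illustrated with a toy label builder. For readability it operates on string tokens rather than token IDs, and it assumes the common convention of `-100` as the "ignore" label value; the library's actual internals may differ:

```python
# Illustrative sketch of steps 3-4 above: walk an unmask-annotated token
# sequence, emit labels only inside unmask spans, and drop the sentinels.
# Operates on string tokens for clarity; the library works on token IDs.

SENTINELS = {
    "<|UNMASK_BEGIN|>": True, "<|UNMASK_END|>": False,
    "<|UNMASK_REASONING_BEGIN|>": True, "<|UNMASK_REASONING_END|>": False,
}
IGNORE_INDEX = -100  # conventional "masked" label value


def build_labels(tokens):
    inputs, labels, unmasked = [], [], False
    for tok in tokens:
        if tok in SENTINELS:
            unmasked = SENTINELS[tok]  # toggle state, drop the sentinel
            continue
        inputs.append(tok)
        labels.append(tok if unmasked else IGNORE_INDEX)
    return inputs, labels


toks = ["<s>", "user:", "hi", "<|UNMASK_BEGIN|>", "hello", "<|UNMASK_END|>"]
inputs, labels = build_labels(toks)
print(inputs)  # ['<s>', 'user:', 'hi', 'hello']
print(labels)  # [-100, -100, -100, 'hello']
```

The final sequences contain no sentinel tokens, consistent with the validation guarantees below.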

### Validation

The library validates that:

- Both `content` and `reasoning_content` must be strings if present
- Special unmask tokens are properly processed and removed
- The final training data contains no residual unmask tokens

### Error Handling

Common errors and their meanings:

- `"unmasking non-string data types is currently unsupported"`: The `content` field contains non-string data
- `"received an entry for reasoning_content which was not a string"`: The `reasoning_content` field contains non-string data
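The string-type checks behind these errors can be sketched as follows. `validate_message` is a hypothetical stand-in for the library's internal validation; the error messages are the ones documented above:

```python
# Sketch of the string-type validation behind the documented errors.
# validate_message is an illustrative helper, not the library's API;
# the two error messages match the ones listed in this section.

def validate_message(message: dict) -> None:
    content = message.get("content")
    if content is not None and not isinstance(content, str):
        raise ValueError("unmasking non-string data types is currently unsupported")
    reasoning = message.get("reasoning_content")
    if reasoning is not None and not isinstance(reasoning, str):
        raise ValueError("received an entry for reasoning_content which was not a string")


validate_message({"role": "assistant", "content": "ok"})  # passes silently
try:
    validate_message({"role": "assistant", "reasoning_content": ["not", "a", "string"]})
except ValueError as err:
    print(err)  # received an entry for reasoning_content which was not a string
```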

## Integration with Existing Features

### Unmasking Policies

The `reasoning_content` field respects all existing unmasking policies:

- When `unmask=true` is set on a sample, both fields are unmasked for non-system roles
- When `unmask=false` (default), only assistant role messages are unmasked
- Custom unmask role configurations work with both fields
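A rough sketch of the role-selection policy above, assuming a chat with `system`, `user`, and `assistant` roles. `roles_to_unmask` and its parameters are illustrative; the library's actual configuration surface may differ:

```python
# Sketch of the unmasking policy above. roles_to_unmask is a hypothetical
# helper; the set of non-system roles here is illustrative.

def roles_to_unmask(unmask: bool = False, custom_roles=None) -> set:
    if custom_roles is not None:
        return set(custom_roles)      # explicit configuration wins
    if unmask:
        return {"user", "assistant"}  # all non-system roles
    return {"assistant"}              # default: assistant only


print(roles_to_unmask())             # {'assistant'}
print(roles_to_unmask(unmask=True))  # user and assistant (set order varies)
```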

### Chat Templates

The `reasoning_content` field is not supported by the legacy chat templates and will not be rendered by them.

### Backward Compatibility

The feature is fully backward compatible:

- Existing datasets without `reasoning_content` continue to work unchanged
- All existing training configurations and arguments remain valid

## Testing

The library includes comprehensive tests for reasoning content functionality:

- Unit tests for message wrapping and processing
- Integration tests with real tokenizers
- Validation tests for error conditions
- Backward compatibility tests

## Important Notes

### Automatic Processing Behavior

1. **Always processed when present**: If `reasoning_content` exists in a message, it will always be processed and unmasked as long as the message role is targeted for unmasking. This ensures that reasoning traces are properly included in the training data without requiring additional configuration.

2. **DeepSeek R1 and Qwen3 compatibility**: Models using the DeepSeek R1 thought processor (such as Qwen3) **must** supply their thinking traces in the `reasoning_content` field to be processed correctly. Failure to do so may result in improper handling of reasoning tokens and suboptimal training performance.

3. **Separate token handling**: The library uses distinct unmask tokens for reasoning content (`<|UNMASK_REASONING_BEGIN|>` and `<|UNMASK_REASONING_END|>`) versus regular content (`<|UNMASK_BEGIN|>` and `<|UNMASK_END|>`), allowing for proper differentiation during training.

## Best Practices

1. **Consistent Usage**: When applicable, use `reasoning_content` consistently within a dataset for best results
2. **Clear Separation**: Keep reasoning traces separate from final outputs for clarity
3. **Template Compatibility**: Ensure your chat template properly handles both fields
4. **Validation**: Test your data processing pipeline with small samples before full training

## Migration Guide

To add reasoning content support to existing datasets:

1. Add `reasoning_content` fields to relevant messages
2. Ensure content is in string format
3. Test with a small sample using the data processing pipeline
4. Verify that unmask tokens are properly processed
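Steps 1 and 2 above can be sketched as a small migration helper that attaches a reasoning trace to the assistant turns of an existing sample. The `add_reasoning` helper and where the trace text comes from are illustrative assumptions:

```python
# Migration sketch for steps 1-2 above: attach a reasoning trace to the
# assistant turns of an existing sample. add_reasoning and the trace
# source are illustrative; adapt to your own dataset layout.

def add_reasoning(sample: dict, trace: str) -> dict:
    out = {"messages": [dict(m) for m in sample["messages"]]}  # copy, don't mutate
    for msg in out["messages"]:
        if msg["role"] == "assistant":
            msg["reasoning_content"] = str(trace)  # must be a string (step 2)
    return out


sample = {"messages": [
    {"role": "user", "content": "What is 15 * 23?"},
    {"role": "assistant", "content": "15 * 23 = 345"},
]}
migrated = add_reasoning(sample, "15 * 20 + 15 * 3 = 300 + 45 = 345")
print(migrated["messages"][1]["reasoning_content"])
# 15 * 20 + 15 * 3 = 300 + 45 = 345
```

After migrating, run a small sample through the data processing pipeline (steps 3-4) before launching a full training run.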

No changes to training arguments or configuration are required.