# AI Inference in GitHub Actions

[![Lint Codebase](https://github.com/actions/typescript-action/actions/workflows/linter.yml/badge.svg)](https://github.com/super-linter/super-linter)

[![Check dist/](https://github.com/actions/typescript-action/actions/workflows/check-dist.yml/badge.svg)](https://github.com/actions/typescript-action/actions/workflows/check-dist.yml)
[![CodeQL](https://github.com/actions/typescript-action/actions/workflows/codeql-analysis.yml/badge.svg)](https://github.com/actions/typescript-action/actions/workflows/codeql-analysis.yml)
![Coverage](./badges/coverage.svg)

Use AI models from [GitHub Models](https://github.com/marketplace/models) in
your workflows.

## Usage

Create a workflow to use the AI inference action:

```yaml
name: 'AI inference'
on: workflow_dispatch

jobs:
  inference:
    permissions:
      models: read
    runs-on: ubuntu-latest
    steps:
      - name: Run AI Inference
        id: inference
        uses: actions/ai-inference@v1
        with:
          prompt: 'Hello!'

      - name: Print Output
        id: output
        run: echo "${{ steps.inference.outputs.response }}"
```
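
If a later job needs the model's answer, you can surface it as a job output. A
minimal sketch under that assumption (the job and step names are illustrative):

```yaml
jobs:
  inference:
    permissions:
      models: read
    runs-on: ubuntu-latest
    outputs:
      response: ${{ steps.inference.outputs.response }}
    steps:
      - name: Run AI Inference
        id: inference
        uses: actions/ai-inference@v1
        with:
          prompt: 'Hello!'

  consume:
    needs: inference
    runs-on: ubuntu-latest
    steps:
      - name: Print Output
        run: echo "${{ needs.inference.outputs.response }}"
```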

### Using a prompt file

You can also provide a prompt file instead of an inline prompt. The action
supports both plain text files and structured `.prompt.yml` files:

```yaml
steps:
  - name: Run AI Inference with Text File
    id: inference
    uses: actions/ai-inference@v1
    with:
      prompt-file: './path/to/prompt.txt'
```
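
A prompt file can also be generated earlier in the same job, for example from
the event payload. A hedged sketch that summarizes the triggering issue (the
`./prompt.txt` path and step names are made up for illustration):

```yaml
steps:
  - name: Build Prompt File
    env:
      ISSUE_BODY: ${{ github.event.issue.body }}
    run: printf 'Summarize this issue:\n%s\n' "$ISSUE_BODY" > prompt.txt

  - name: Run AI Inference
    id: inference
    uses: actions/ai-inference@v1
    with:
      prompt-file: './prompt.txt'
```

Passing the issue body through `env` rather than interpolating it directly into
the script keeps untrusted input out of the shell syntax.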

### Using GitHub prompt.yml files

For more advanced use cases, you can use structured `.prompt.yml` files that
support templating, custom models, and JSON schema responses:

```yaml
steps:
  - name: Run AI Inference with Prompt YAML
    id: inference
    uses: actions/ai-inference@v1
    with:
      prompt-file: './.github/prompts/sample.prompt.yml'
      input: |
        var1: hello
        var2: ${{ steps.some-step.outputs.output }}
        var3: |
          Lorem Ipsum
          Hello World
```

#### Simple prompt.yml example

```yaml
messages:
  - role: system
    content: Be as concise as possible
  - role: user
    content: 'Compare {{a}} and {{b}}, please'
model: openai/gpt-4o
```

#### Prompt.yml with JSON schema support

```yaml
messages:
  - role: system
    content:
      You are a helpful assistant that describes animals using JSON format
  - role: user
    content: |-
      Describe a {{animal}}
      Use JSON format as specified in the response schema
model: openai/gpt-4o
responseFormat: json_schema
jsonSchema: |-
  {
    "name": "describe_animal",
    "strict": true,
    "schema": {
      "type": "object",
      "properties": {
        "name": {
          "type": "string",
          "description": "The name of the animal"
        },
        "habitat": {
          "type": "string",
          "description": "The habitat the animal lives in"
        }
      },
      "additionalProperties": false,
      "required": [
        "name",
        "habitat"
      ]
    }
  }
```
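
Because the schema makes the response machine-readable, a follow-up step can
parse individual fields. A sketch using `jq` (available on GitHub-hosted Ubuntu
runners); the step name is illustrative:

```yaml
steps:
  - name: Extract Fields
    env:
      RESPONSE: ${{ steps.inference.outputs.response }}
    run: |
      echo "$RESPONSE" | jq -r '.name'
      echo "$RESPONSE" | jq -r '.habitat'
```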

Variables in `.prompt.yml` files are templated with the `{{variable}}` syntax
and are supplied via the `input` parameter as YAML key-value pairs.

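For instance, the simple example above declares `{{a}}` and `{{b}}`, so the
workflow supplies both names under `input` (the file path and values here are
hypothetical):

```yaml
with:
  prompt-file: './.github/prompts/compare.prompt.yml'
  input: |
    a: GitHub Actions
    b: Jenkins
```
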
### Using a system prompt file

In addition to the regular prompt, you can provide a system prompt file instead
of an inline system prompt:

```yaml
steps:
  - name: Run AI Inference with System Prompt File
    id: inference
    uses: actions/ai-inference@v1
    with:
      prompt: 'Hello!'
      system-prompt-file: './path/to/system-prompt.txt'
```

### Read the response from a file instead of the output

This can be useful when the model response exceeds the GitHub Actions output
size limit:

```yaml
steps:
  - name: Run AI Inference
    id: inference
    uses: actions/ai-inference@v1
    with:
      prompt: 'Hello!'

  - name: Use Response File
    run: |
      echo "Response saved to: ${{ steps.inference.outputs.response-file }}"
      cat "${{ steps.inference.outputs.response-file }}"
```
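
Since the file sidesteps the step-output size limit, a large response can be
forwarded wherever it is needed. One option, sketched here, is appending it to
the job's step summary (note the summary has its own size cap):

```yaml
steps:
  - name: Publish Response to Summary
    run: cat "${{ steps.inference.outputs.response-file }}" >> "$GITHUB_STEP_SUMMARY"
```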

### GitHub MCP Integration (Model Context Protocol)

This action now supports **read-only** integration with the GitHub-hosted Model
Context Protocol (MCP) server, which provides access to GitHub tools like
repository management, issue tracking, and pull request operations.

```yaml
steps:
  - name: AI Inference with GitHub Tools
    id: inference
    uses: actions/ai-inference@v1
    with:
      prompt: 'List my open pull requests and create a summary'
      enable-github-mcp: true
      token: ${{ secrets.USER_PAT }}
```

When MCP is enabled, the AI model has access to GitHub tools and can perform
actions like searching issues and PRs.

**Note:** For now, MCP integration cannot be used with the built-in
`GITHUB_TOKEN`. You must pass a GitHub personal access token (PAT) into
`token:` instead.

## Inputs

Various inputs are defined in [`action.yml`](action.yml) to let you configure
the action:

| Name                 | Description                                                                                                                                                    | Default                              |
| -------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------ |
| `token`              | Token to use for inference. Typically the `GITHUB_TOKEN` secret                                                                                                | `github.token`                       |
| `prompt`             | The prompt to send to the model                                                                                                                                | N/A                                  |
| `prompt-file`        | Path to a file containing the prompt (supports `.txt` and `.prompt.yml` formats). If both `prompt` and `prompt-file` are provided, `prompt-file` takes precedence | `""`                              |
| `input`              | Template variables in YAML format for `.prompt.yml` files (e.g., `var1: value1` on separate lines)                                                             | `""`                                 |
| `system-prompt`      | The system prompt to send to the model                                                                                                                         | `"You are a helpful assistant"`      |
| `system-prompt-file` | Path to a file containing the system prompt. If both `system-prompt` and `system-prompt-file` are provided, `system-prompt-file` takes precedence              | `""`                                 |
| `model`              | The model to use for inference. Must be available in the [GitHub Models](https://github.com/marketplace?type=models) catalog                                  | `openai/gpt-4o`                      |
| `endpoint`           | The endpoint to use for inference. If you're running this as part of an organization, you should probably use the org-specific Models endpoint                | `https://models.github.ai/inference` |
| `max-tokens`         | The maximum number of tokens to generate                                                                                                                      | `200`                                |
| `enable-github-mcp`  | Enable Model Context Protocol integration with GitHub tools                                                                                                   | `false`                              |

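As a hedged illustration of overriding several defaults at once (assuming the
`openai/gpt-4o-mini` model remains in the GitHub Models catalog):

```yaml
steps:
  - name: Run AI Inference with Custom Settings
    id: inference
    uses: actions/ai-inference@v1
    with:
      prompt: 'Summarize the latest release notes'
      model: openai/gpt-4o-mini
      max-tokens: 500
      system-prompt: 'Reply in one short paragraph'
```
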
## Outputs

The AI inference action provides the following outputs:

| Name            | Description                                                              |
| --------------- | ------------------------------------------------------------------------ |
| `response`      | The response from the model                                              |
| `response-file` | The file path where the response is saved (useful for larger responses)  |

## Required Permissions

In order to run inference with GitHub Models, the GitHub AI inference action
requires the `models: read` permission:

```yaml
permissions:
  contents: read
  models: read
```

## Publishing a New Release

This project includes a helper script, [`script/release`](./script/release),
designed to streamline the process of tagging and pushing new releases for
GitHub Actions. For more information, see
[Versioning](https://github.com/actions/toolkit/blob/master/docs/action-versioning.md)
in the GitHub Actions toolkit.

GitHub Actions allows users to select a specific version of the action to use,
based on release tags. The release script simplifies this process by performing
the following steps:

1. **Retrieving the latest release tag:** The script starts by fetching the most
   recent SemVer release tag of the current branch, by looking at the local data
   available in your repository.
1. **Prompting for a new release tag:** The user is then prompted to enter a new
   release tag. To assist with this, the script displays the tag retrieved in
   the previous step, and validates the format of the entered tag (vX.X.X). The
   user is also reminded to update the version field in package.json.
1. **Tagging the new release:** The script then tags a new release and syncs the
   separate major tag (e.g. v1, v2) with the new release tag (e.g. v1.0.0,
   v2.1.2), as sketched after this list. When the user is creating a new major
   release, the script auto-detects this and creates a `releases/v#` branch for
   the previous major version.
1. **Pushing changes to remote:** Finally, the script pushes the necessary
   commits, tags and branches to the remote repository. From here, you will need
   to create a new release in GitHub so users can easily reference the new tags
   in their workflows.
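
The major-tag sync in step 3 boils down to a handful of git operations. A
hand-rolled sketch of what the script automates (tag values are examples only):

```bash
# Create the new release tag and move the floating major tag to the same commit
git tag -a v1.2.3 -m "v1.2.3"
git tag -fa v1 -m "Sync v1 with v1.2.3"

# Push the release tag, then force-push the updated major tag
git push origin v1.2.3
git push origin v1 --force
```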

## License

This project is licensed under the terms of the MIT open source license. Please
refer to [MIT](./LICENSE.txt) for the full terms.

## Contributions

Contributions are welcome! See the [Contributor's Guide](CONTRIBUTING.md).