
Commit 1780121: Support .prompt.yml files

1 parent 0479ac8

17 files changed: +3899 −139 lines

README.md

Lines changed: 87 additions & 13 deletions
@@ -36,17 +36,90 @@ jobs:

### Using a prompt file

You can also provide a prompt file instead of an inline prompt. The action
supports both plain text files and structured `.prompt.yml` files:

```yaml
steps:
  - name: Run AI Inference with Text File
    id: inference
    uses: actions/ai-inference@v1
    with:
      prompt-file: './path/to/prompt.txt'
```

### Using GitHub prompt.yml files

For more advanced use cases, you can use structured `.prompt.yml` files that
support templating, custom models, and JSON schema responses:

```yaml
steps:
  - name: Run AI Inference with Prompt YAML
    id: inference
    uses: actions/ai-inference@v1
    with:
      prompt-file: './.github/prompts/sample.prompt.yml'
      input: |
        var1: hello
        var2: ${{ steps.some-step.outputs.output }}
        var3: |
          Lorem Ipsum
          Hello World
```

#### Simple prompt.yml example

```yaml
messages:
  - role: system
    content: Be as concise as possible
  - role: user
    content: 'Compare {{a}} and {{b}}, please'
model: openai/gpt-4o
```

#### Prompt.yml with JSON schema support

```yaml
messages:
  - role: system
    content: You are a helpful assistant that describes animals using JSON format
  - role: user
    content: |-
      Describe a {{animal}}
      Use JSON format as specified in the response schema
model: openai/gpt-4o
responseFormat: json_schema
jsonSchema: |-
  {
    "name": "describe_animal",
    "strict": true,
    "schema": {
      "type": "object",
      "properties": {
        "name": {
          "type": "string",
          "description": "The name of the animal"
        },
        "habitat": {
          "type": "string",
          "description": "The habitat the animal lives in"
        }
      },
      "additionalProperties": false,
      "required": [
        "name",
        "habitat"
      ]
    }
  }
```

Variables in prompt.yml files are templated using the `{{variable}}` format and
are supplied via the `input` parameter in YAML format.
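To make the templating concrete, here is a minimal sketch of how `{{variable}}` placeholders might be resolved against the `input` YAML block. The `renderTemplate` helper is hypothetical, not the action's actual implementation:

```typescript
import * as yaml from 'js-yaml'

// Hypothetical helper: substitutes {{name}} placeholders using values
// parsed from the action's `input` parameter (a YAML mapping).
function renderTemplate(template: string, inputYaml: string): string {
  const vars = (yaml.load(inputYaml) ?? {}) as Record<string, unknown>
  return template.replace(/\{\{\s*(\w+)\s*\}\}/g, (placeholder, name) =>
    name in vars ? String(vars[name]) : placeholder
  )
}

// 'Compare {{a}} and {{b}}, please' with input 'a: cats\nb: dogs'
// renders to 'Compare cats and dogs, please'.
console.log(renderTemplate('Compare {{a}} and {{b}}, please', 'a: cats\nb: dogs'))
```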
### Using a system prompt file

In addition to the regular prompt, you can provide a system prompt file instead
@@ -107,17 +180,18 @@ GitHub permissions for the operations the AI will perform.

Various inputs are defined in [`action.yml`](action.yml) to let you configure
the action:

| Name | Description | Default |
| --- | --- | --- |
| `token` | Token to use for inference. Typically the GITHUB_TOKEN secret | `github.token` |
| `prompt` | The prompt to send to the model | N/A |
| `prompt-file` | Path to a file containing the prompt (supports .txt and .prompt.yml formats). If both `prompt` and `prompt-file` are provided, `prompt-file` takes precedence | `""` |
| `input` | Template variables in YAML format for .prompt.yml files (e.g., `var1: value1` on separate lines) | `""` |
| `system-prompt` | The system prompt to send to the model | `"You are a helpful assistant"` |
| `system-prompt-file` | Path to a file containing the system prompt. If both `system-prompt` and `system-prompt-file` are provided, `system-prompt-file` takes precedence | `""` |
| `model` | The model to use for inference. Must be available in the [GitHub Models](https://github.com/marketplace?type=models) catalog | `openai/gpt-4o` |
| `endpoint` | The endpoint to use for inference. If you're running this as part of an org, you should probably use the org-specific Models endpoint | `https://models.github.ai/inference` |
| `max-tokens` | The max number of tokens to generate | 200 |
| `enable-github-mcp` | Enable Model Context Protocol integration with GitHub tools | `false` |
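As a quick illustration of the precedence rule in the table above, a step that sets both inputs reads the prompt from the file and ignores the inline value (the path here is a placeholder):

```yaml
- name: Prompt file takes precedence
  uses: actions/ai-inference@v1
  with:
    prompt: 'This inline prompt is ignored'
    prompt-file: './path/to/prompt.txt'
```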
## Outputs
Lines changed: 41 additions & 0 deletions

@@ -0,0 +1,41 @@

```yaml
messages:
  - role: system
    content: You are a helpful assistant that describes animals using JSON format
  - role: user
    content: |-
      Describe a {{animal}}
      Use JSON format as specified in the response schema
model: openai/gpt-4o
responseFormat: json_schema
jsonSchema: |-
  {
    "name": "describe_animal",
    "strict": true,
    "schema": {
      "type": "object",
      "properties": {
        "name": {
          "type": "string",
          "description": "The name of the animal"
        },
        "habitat": {
          "type": "string",
          "description": "The habitat the animal lives in"
        },
        "characteristics": {
          "type": "array",
          "items": {
            "type": "string"
          },
          "description": "Key characteristics of the animal"
        }
      },
      "additionalProperties": false,
      "required": [
        "name",
        "habitat",
        "characteristics"
      ]
    }
  }
```
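For orientation, a model response that satisfies this schema would look something like the following; the values are invented for illustration:

```json
{
  "name": "Red fox",
  "habitat": "Forests, grasslands, and urban edges",
  "characteristics": ["omnivorous", "bushy tail", "adaptable to cities"]
}
```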
Lines changed: 6 additions & 0 deletions

@@ -0,0 +1,6 @@

```yaml
messages:
  - role: system
    content: Be as concise as possible
  - role: user
    content: 'Compare {{a}} and {{b}}, please'
model: openai/gpt-4o
```
Lines changed: 163 additions & 0 deletions

@@ -0,0 +1,163 @@

```typescript
import { describe, it, expect } from '@jest/globals'
import {
  buildMessages,
  buildResponseFormat,
  buildInferenceRequest
} from '../src/helpers'
import { PromptConfig } from '../src/prompt'

describe('helpers.ts - inference request building', () => {
  describe('buildMessages', () => {
    it('should build messages from prompt config', () => {
      const promptConfig: PromptConfig = {
        messages: [
          { role: 'system', content: 'System message' },
          { role: 'user', content: 'User message' }
        ]
      }

      const result = buildMessages(promptConfig)
      expect(result).toEqual([
        { role: 'system', content: 'System message' },
        { role: 'user', content: 'User message' }
      ])
    })

    it('should build messages from legacy format', () => {
      const result = buildMessages(undefined, 'System prompt', 'User prompt')
      expect(result).toEqual([
        { role: 'system', content: 'System prompt' },
        { role: 'user', content: 'User prompt' }
      ])
    })

    it('should use default system prompt when none provided', () => {
      const result = buildMessages(undefined, undefined, 'User prompt')
      expect(result).toEqual([
        { role: 'system', content: 'You are a helpful assistant' },
        { role: 'user', content: 'User prompt' }
      ])
    })
  })

  describe('buildResponseFormat', () => {
    it('should build JSON schema response format', () => {
      const promptConfig: PromptConfig = {
        messages: [],
        responseFormat: 'json_schema',
        jsonSchema: JSON.stringify({
          name: 'test_schema',
          schema: { type: 'object' }
        })
      }

      const result = buildResponseFormat(promptConfig)
      expect(result).toEqual({
        type: 'json_schema',
        json_schema: {
          name: 'test_schema',
          schema: { type: 'object' }
        }
      })
    })

    it('should return undefined for text format', () => {
      const promptConfig: PromptConfig = {
        messages: [],
        responseFormat: 'text'
      }

      const result = buildResponseFormat(promptConfig)
      expect(result).toBeUndefined()
    })

    it('should return undefined when no response format specified', () => {
      const promptConfig: PromptConfig = {
        messages: []
      }

      const result = buildResponseFormat(promptConfig)
      expect(result).toBeUndefined()
    })

    it('should throw error for invalid JSON schema', () => {
      const promptConfig: PromptConfig = {
        messages: [],
        responseFormat: 'json_schema',
        jsonSchema: 'invalid json'
      }

      expect(() => buildResponseFormat(promptConfig)).toThrow(
        'Invalid JSON schema'
      )
    })
  })

  describe('buildInferenceRequest', () => {
    it('should build complete inference request from prompt config', () => {
      const promptConfig: PromptConfig = {
        messages: [
          { role: 'system', content: 'System message' },
          { role: 'user', content: 'User message' }
        ],
        responseFormat: 'json_schema',
        jsonSchema: JSON.stringify({
          name: 'test_schema',
          schema: { type: 'object' }
        })
      }

      const result = buildInferenceRequest(
        promptConfig,
        undefined,
        undefined,
        'gpt-4',
        100,
        'https://api.test.com',
        'test-token'
      )

      expect(result).toEqual({
        messages: [
          { role: 'system', content: 'System message' },
          { role: 'user', content: 'User message' }
        ],
        modelName: 'gpt-4',
        maxTokens: 100,
        endpoint: 'https://api.test.com',
        token: 'test-token',
        responseFormat: {
          type: 'json_schema',
          json_schema: {
            name: 'test_schema',
            schema: { type: 'object' }
          }
        }
      })
    })

    it('should build inference request from legacy format', () => {
      const result = buildInferenceRequest(
        undefined,
        'System prompt',
        'User prompt',
        'gpt-4',
        100,
        'https://api.test.com',
        'test-token'
      )

      expect(result).toEqual({
        messages: [
          { role: 'system', content: 'System prompt' },
          { role: 'user', content: 'User prompt' }
        ],
        modelName: 'gpt-4',
        maxTokens: 100,
        endpoint: 'https://api.test.com',
        token: 'test-token',
        responseFormat: undefined
      })
    })
  })
})
```

__tests__/inference.test.ts

Lines changed: 4 additions & 2 deletions

```diff
@@ -33,8 +33,10 @@ const { simpleInference, mcpInference } = await import('../src/inference.js')
 
 describe('inference.ts', () => {
   const mockRequest = {
-    systemPrompt: 'You are a test assistant',
-    prompt: 'Hello, AI!',
+    messages: [
+      { role: 'system', content: 'You are a test assistant' },
+      { role: 'user', content: 'Hello, AI!' }
+    ],
     modelName: 'gpt-4',
     maxTokens: 100,
     endpoint: 'https://api.test.com',
```
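The mock's move from separate `systemPrompt`/`prompt` fields to a `messages` array matches the chat-style request shape the new helpers build. A rough sketch of that shape, inferred from the test fixtures rather than the repository's actual type definitions:

```typescript
// Inferred from the test expectations above; illustrative only.
type ChatRole = 'system' | 'user' | 'assistant'

interface ChatMessage {
  role: ChatRole
  content: string
}

interface InferenceRequest {
  messages: ChatMessage[] // replaces the legacy systemPrompt/prompt pair
  modelName: string
  maxTokens: number
  endpoint: string
  token: string
  responseFormat?: { type: 'json_schema'; json_schema: unknown }
}
```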
