Commit 4fd6464
Add read-only MCP support
1 parent 86c0691 commit 4fd6464

File tree

13 files changed: 182 additions & 802 deletions

README.md

Lines changed: 18 additions & 26 deletions

````diff
@@ -82,9 +82,9 @@ steps:
 ### GitHub MCP Integration (Model Context Protocol)

-This action now supports integration with the GitHub-hosted Model Context
-Protocol (MCP) server, which provides access to GitHub tools like repository
-management, issue tracking, and pull request operations.
+This action now supports **read-only** integration with the GitHub-hosted Model
+Context Protocol (MCP) server, which provides access to GitHub tools like
+repository management, issue tracking, and pull request operations.

 ```yaml
 steps:
@@ -93,39 +93,31 @@ steps:
     uses: actions/ai-inference@v1
     with:
       prompt: 'List my open pull requests and create a summary'
-      enable-mcp: true
-      mcp-server-url: 'https://github-mcp-server.fly.dev/mcp' # Optional, this is the default
+      enable-github-mcp: true
 ```

 When MCP is enabled, the AI model will have access to GitHub tools and can
-perform actions like:
+perform actions like searching issues and PRs.

-- Listing and managing repositories
-- Creating, reading, and updating issues
-- Managing pull requests
-- Searching code and repositories
-- And more GitHub operations
-
-**Note:** MCP integration requires appropriate GitHub permissions for the
-operations the AI will perform.
+**Note:** MCP integration requires your workflow token to have appropriate
+GitHub permissions for the operations the AI will perform.

 ## Inputs

 Various inputs are defined in [`action.yml`](action.yml) to let you configure
 the action:

-| Name | Description | Default |
-| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------- |
-| `token` | Token to use for inference. Typically the GITHUB_TOKEN secret | `github.token` |
-| `prompt` | The prompt to send to the model | N/A |
-| `prompt-file` | Path to a file containing the prompt. If both `prompt` and `prompt-file` are provided, `prompt-file` takes precedence | `""` |
-| `system-prompt` | The system prompt to send to the model | `"You are a helpful assistant"` |
-| `system-prompt-file` | Path to a file containing the system prompt. If both `system-prompt` and `system-prompt-file` are provided, `system-prompt-file` takes precedence | `""` |
-| `model` | The model to use for inference. Must be available in the [GitHub Models](https://github.com/marketplace?type=models) catalog | `gpt-4o` |
-| `endpoint` | The endpoint to use for inference. If you're running this as part of an org, you should probably use the org-specific Models endpoint | `https://models.github.ai/inference` |
-| `max-tokens` | The max number of tokens to generate | 200 |
-| `enable-mcp` | Enable Model Context Protocol integration with GitHub tools | `false` |
-| `mcp-server-url` | URL of the MCP server to connect to for GitHub tools | `https://github-mcp-server.fly.dev/mcp` |
+| Name | Description | Default |
+| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------ |
+| `token` | Token to use for inference. Typically the GITHUB_TOKEN secret | `github.token` |
+| `prompt` | The prompt to send to the model | N/A |
+| `prompt-file` | Path to a file containing the prompt. If both `prompt` and `prompt-file` are provided, `prompt-file` takes precedence | `""` |
+| `system-prompt` | The system prompt to send to the model | `"You are a helpful assistant"` |
+| `system-prompt-file` | Path to a file containing the system prompt. If both `system-prompt` and `system-prompt-file` are provided, `system-prompt-file` takes precedence | `""` |
+| `model` | The model to use for inference. Must be available in the [GitHub Models](https://github.com/marketplace?type=models) catalog | `gpt-4o` |
+| `endpoint` | The endpoint to use for inference. If you're running this as part of an org, you should probably use the org-specific Models endpoint | `https://models.github.ai/inference` |
+| `max-tokens` | The max number of tokens to generate | 200 |
+| `enable-github-mcp` | Enable Model Context Protocol integration with GitHub tools | `false` |

 ## Outputs
````
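The README's workflow snippet configures the action through `with:` inputs. As a rough, hypothetical sketch (not the action's real source), this is how a JavaScript action could resolve the boolean `enable-github-mcp` input: the GitHub Actions runner exposes each input as an `INPUT_<NAME>` environment variable (name uppercased, spaces replaced with underscores), which is what `@actions/core`'s `getInput`/`getBooleanInput` read under the hood.

```typescript
// Hypothetical sketch of input resolution for `enable-github-mcp`.
function getInput(name: string): string {
  // The runner maps input `enable-github-mcp` to env var INPUT_ENABLE-GITHUB-MCP
  const key = `INPUT_${name.replace(/ /g, '_').toUpperCase()}`
  return (process.env[key] ?? '').trim()
}

function getBooleanInput(name: string): boolean {
  const value = getInput(name)
  if (['true', 'True', 'TRUE'].includes(value)) return true
  // Unset inputs are treated as false here for simplicity; the real
  // @actions/core throws a TypeError on values that are not booleans.
  if (['false', 'False', 'FALSE', ''].includes(value)) return false
  throw new TypeError(`Input "${name}" is not a valid boolean`)
}

// Simulate the `with:` block from the README's workflow snippet
process.env['INPUT_ENABLE-GITHUB-MCP'] = 'true'
const enableGithubMcp = getBooleanInput('enable-github-mcp')
```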
__tests__/helpers.test.ts

Lines changed: 1 addition & 0 deletions

```diff
@@ -133,6 +133,7 @@ describe('helpers.ts', () => {
   it('handles undefined inputs correctly', () => {
     const defaultValue = 'Default content'

+    // eslint-disable-next-line @typescript-eslint/no-explicit-any
    core.getInput.mockImplementation(() => undefined as any)

    const result = loadContentFromFileOrInput(
```
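The test above exercises `loadContentFromFileOrInput` from the action's helpers. A hedged reconstruction, inferred only from what the test asserts (the real signature and precedence logic may differ): a file input wins over a direct input, and an undefined or empty input falls back to the default value.

```typescript
import { readFileSync, existsSync } from 'node:fs'

// Hypothetical reconstruction of the helper exercised by the test above.
function loadContentFromFileOrInput(
  fileInput: string | undefined,
  directInput: string | undefined,
  defaultValue: string
): string {
  // A file input takes precedence over a direct input
  if (fileInput && existsSync(fileInput)) {
    return readFileSync(fileInput, 'utf-8')
  }
  if (directInput) {
    return directInput
  }
  // Undefined/empty inputs fall back to the default, as the test expects
  return defaultValue
}

// Mirrors the test: undefined inputs yield the default content
const result = loadContentFromFileOrInput(undefined, undefined, 'Default content')
```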

__tests__/inference.test.ts

Lines changed: 12 additions & 5 deletions

```diff
@@ -5,6 +5,7 @@ import { jest } from '@jest/globals'
 import * as core from '../__fixtures__/core.js'

 // Mock Azure AI Inference
+// eslint-disable-next-line @typescript-eslint/no-explicit-any
 const mockPost = jest.fn() as jest.MockedFunction<any>
 const mockPath = jest.fn(() => ({ post: mockPost }))
 const mockClient = jest.fn(() => ({ path: mockPath }))
@@ -19,6 +20,7 @@ jest.unstable_mockModule('@azure/core-auth', () => ({
 }))

 // Mock MCP functions
+// eslint-disable-next-line @typescript-eslint/no-explicit-any
 const mockExecuteToolCalls = jest.fn() as jest.MockedFunction<any>
 jest.unstable_mockModule('../src/mcp.js', () => ({
   executeToolCalls: mockExecuteToolCalls
@@ -112,10 +114,11 @@ describe('inference.ts', () => {

   describe('mcpInference', () => {
     const mockMcpClient = {
+      // eslint-disable-next-line @typescript-eslint/no-explicit-any
       client: {} as any,
       tools: [
         {
-          type: 'function',
+          type: 'function' as const,
           function: {
             name: 'test-tool',
             description: 'A test tool',
@@ -144,15 +147,18 @@ describe('inference.ts', () => {
       const result = await mcpInference(mockRequest, mockMcpClient)

       expect(result).toBe('Hello, user!')
-      expect(core.info).toHaveBeenCalledWith('Running MCP inference with tools')
+      expect(core.info).toHaveBeenCalledWith(
+        'Running GitHub MCP inference with tools'
+      )
       expect(core.info).toHaveBeenCalledWith('MCP inference iteration 1')
       expect(core.info).toHaveBeenCalledWith(
-        'No tool calls requested, ending MCP inference loop'
+        'No tool calls requested, ending GitHub MCP inference loop'
       )

       // The MCP inference loop will always add the assistant message, even when there are no tool calls
       // So we don't check the exact messages, just that tools were included
       expect(mockPost).toHaveBeenCalledTimes(1)
+      // eslint-disable-next-line @typescript-eslint/no-explicit-any
       const callArgs = mockPost.mock.calls[0][0] as any
       expect(callArgs.body.tools).toEqual(mockMcpClient.tools)
       expect(callArgs.body.model).toBe('gpt-4')
@@ -223,6 +229,7 @@ describe('inference.ts', () => {
       expect(mockPost).toHaveBeenCalledTimes(2)

       // Verify the second call includes the conversation history
+      // eslint-disable-next-line @typescript-eslint/no-explicit-any
       const secondCall = mockPost.mock.calls[1][0] as any
       expect(secondCall.body.messages).toHaveLength(5) // system, user, assistant, tool, assistant
       expect(secondCall.body.messages[2].role).toBe('assistant')
@@ -271,7 +278,7 @@ describe('inference.ts', () => {

       expect(mockPost).toHaveBeenCalledTimes(5) // Max iterations reached
       expect(core.warning).toHaveBeenCalledWith(
-        'MCP inference loop exceeded maximum iterations (5)'
+        'GitHub MCP inference loop exceeded maximum iterations (5)'
       )
       expect(result).toBe('Using tool again.') // Last assistant message
     })
@@ -296,7 +303,7 @@ describe('inference.ts', () => {

       expect(result).toBe('Hello, user!')
       expect(core.info).toHaveBeenCalledWith(
-        'No tool calls requested, ending MCP inference loop'
+        'No tool calls requested, ending GitHub MCP inference loop'
       )
       expect(mockExecuteToolCalls).not.toHaveBeenCalled()
     })
```
