You signed in with another tab or window. Reload to refresh your session.You signed out in another tab or window. Reload to refresh your session.You switched accounts on another tab or window. Reload to refresh your session.Dismiss alert
Copy file name to clipboardExpand all lines: src/pentesting-ci-cd/github-security/abusing-github-actions/README.md
+48Lines changed: 48 additions & 0 deletions
Display the source diff
Display the rich diff
Original file line number
Diff line number
Diff line change
@@ -598,6 +598,51 @@ jobs:
598
598
599
599
Tip: for stealth during testing, encrypt before printing (openssl is preinstalled on GitHub-hosted runners).
600
600
601
+
### AI Agent Prompt Injection & Secret Exfiltration in CI/CD
602
+
603
+
LLM-driven workflows such as Gemini CLI, Claude Code Actions, OpenAI Codex, or GitHub AI Inference increasingly appear inside Actions/GitLab pipelines. As shown in [PromptPwnd](https://www.aikido.dev/blog/promptpwnd-github-actions-ai-agents), these agents often ingest untrusted repository metadata while holding privileged tokens and the ability to invoke `run_shell_command` or GitHub CLI helpers, so any field that attackers can edit (issues, PRs, commit messages, release notes, comments) becomes a control surface for the runner.
604
+
605
+
#### Typical exploitation chain
606
+
607
+
- User-controlled content is interpolated verbatim into the prompt (or later fetched via agent tools).
608
+
- Classic prompt-injection wording (“ignore previous instructions”, "after analysis run …") convinces the LLM to call exposed tools.
609
+
- Tool invocations inherit the job environment, so `$GITHUB_TOKEN`, `$GEMINI_API_KEY`, cloud access tokens, or AI provider keys can be written into issues/PRs/comments/logs, or used to run arbitrary CLI operations under repository write scopes.
610
+
611
+
#### Gemini CLI case study
612
+
613
+
Gemini’s automated triage workflow exported untrusted metadata to env vars and interpolated them inside the model request:
614
+
615
+
```yaml
616
+
env:
617
+
ISSUE_TITLE: '${{ github.event.issue.title }}'
618
+
ISSUE_BODY: '${{ github.event.issue.body }}'
619
+
620
+
prompt: |
621
+
2. Review the issue title and body: "${ISSUE_TITLE}" and "${ISSUE_BODY}".
622
+
```
623
+
624
+
The same job exposed `GEMINI_API_KEY`, `GOOGLE_CLOUD_ACCESS_TOKEN`, and a write-capable `GITHUB_TOKEN`, plus tools such as `run_shell_command(gh issue comment)`, `run_shell_command(gh issue view)`, and `run_shell_command(gh issue edit)`. A malicious issue body can smuggle executable instructions:
The agent will faithfully call `gh issue edit`, leaking both environment variables back into the public issue body. Any tool that writes to repository state (labels, comments, artifacts, logs) can be abused for deterministic exfiltration or repository manipulation, even if no general-purpose shell is exposed.
634
+
635
+
#### Other AI agent surfaces
636
+
637
+
- **Claude Code Actions** – Setting `allowed_non_write_users: "*"` lets anyone trigger the workflow. Prompt injection can then drive privileged `run_shell_command(gh pr edit ...)` executions even when the initial prompt is sanitized because Claude can fetch issues/PRs/comments via its tools.
638
+
- **OpenAI Codex Actions** – Combining `allow-users: "*"` with a permissive `safety-strategy` (anything other than `drop-sudo`) removes both trigger gating and command filtering, letting untrusted actors request arbitrary shell/GitHub CLI invocations.
639
+
- **GitHub AI Inference with MCP** – Enabling `enable-github-mcp: true` turns MCP methods into yet another tool surface. Injected instructions can request MCP calls that read or edit repo data or embed `$GITHUB_TOKEN` inside responses.
640
+
641
+
#### Indirect prompt injection
642
+
643
+
Even if developers avoid inserting `${{ github.event.* }}` fields into the initial prompt, an agent that can call `gh issue view`, `gh pr view`, `run_shell_command(gh issue comment)`, or MCP endpoints will eventually fetch attacker-controlled text. Payloads can therefore sit in issues, PR descriptions, or comments until the AI agent reads them mid-run, at which point the malicious instructions control subsequent tool choices.
644
+
645
+
601
646
### Abusing Self-hosted runners
602
647
603
648
The way to find which **Github Actions are being executed in non-github infrastructure** is to search for **`runs-on: self-hosted`** in the Github Action configuration yaml.
@@ -684,6 +729,9 @@ An organization in GitHub is very proactive in reporting accounts to GitHub. All
684
729
## References
685
730
686
731
- [GitHub Actions: A Cloudy Day for Security - Part 1](https://binarysecurity.no/posts/2025/08/securing-gh-actions-part1)
732
+
- [PromptPwnd: Prompt Injection Vulnerabilities in GitHub Actions Using AI Agents](https://www.aikido.dev/blog/promptpwnd-github-actions-ai-agents)
0 commit comments