Commit c4afc1c

Author: Jeremy Dai
Jeremy/tech 44 mcpm automate json creation (#11)
* test
* Restrict workflow to custom user list
1 parent b08a24b commit c4afc1c

File tree

2 files changed: +83 −0 lines changed


.github/workflows/auto-update.yml

Lines changed: 50 additions & 0 deletions
@@ -0,0 +1,50 @@

```yaml
name: Create Server JSON and Update Repo

on:
  issues:
    types: [labeled]

jobs:
  create-json-and-update:
    runs-on: ubuntu-latest
    if: github.event.label.name == 'new-server'
    steps:
      - name: Check if user is authorized
        env:
          SENDER: ${{ github.event.sender.login }}
        run: |
          # Custom list of authorized users (GitHub usernames)
          AUTHORIZED_USERS="jeremy-dai-txyz JoJoJoJoJoJoJo niechen"
          if echo "$AUTHORIZED_USERS" | grep -q -w "$SENDER"; then
            echo "User $SENDER is authorized"
          else
            echo "User $SENDER is not authorized"
            exit 1 # Fail the workflow if unauthorized
          fi

      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.x'

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install requests openai

      - name: Run the script
        env:
          ISSUE_BODY: ${{ github.event.issue.body }}
        run: |
          python scripts/get_manifest.py "$ISSUE_BODY"

      - name: Commit the new content
        run: |
          git config user.name "GitHub Action"
          git config user.email "[email protected]"
          git add .
          git commit -m "Update repo with JSON content from issue #${{ github.event.issue.number }}" || echo "No changes to commit"
          git push
```
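The authorization step gates the workflow on a hard-coded allow-list: `grep -q -w` does a quiet, whole-word match of the issue sender against `AUTHORIZED_USERS`, so a partial username cannot slip through. A rough sketch of the same check in Python (`is_authorized` is a hypothetical helper for illustration, not part of this repo):

```python
# Hypothetical re-implementation of the workflow's allow-list check.
# The usernames mirror AUTHORIZED_USERS in the workflow above.
AUTHORIZED_USERS = {"jeremy-dai-txyz", "JoJoJoJoJoJoJo", "niechen"}


def is_authorized(sender: str) -> bool:
    # `grep -q -w "$SENDER"` matches whole words only, so exact set
    # membership is the closest Python equivalent: a prefix or a
    # superstring of an authorized name must not pass.
    return sender in AUTHORIZED_USERS


print(is_authorized("niechen"))   # True
print(is_authorized("niechen2"))  # False
```

One caveat of the shell version: `grep` interprets `$SENDER` as a regular expression, so a sender name containing metacharacters could behave unexpectedly; adding `-F` would make the match literal.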

scripts/get_manifest.py

Lines changed: 33 additions & 0 deletions
@@ -0,0 +1,33 @@

```python
import json
import sys

import openai
import requests


def scrape_url(url):
    try:
        response = requests.get(url, timeout=10)
        response.raise_for_status()
        content = response.json()

        # For testing: content is parsed JSON (a dict or list), so serialize
        # it with json.dump; f.write(content) would raise a TypeError.
        with open('scraped_content.txt', 'w', encoding='utf-8') as f:
            json.dump(content, f, ensure_ascii=False, indent=2)
        print(f"Scraped content from {url}")
    except Exception as e:
        print(f"Error scraping {url}: {e}")
        sys.exit(1)


if __name__ == "__main__":
    issue_body = sys.argv[1]
    # Extract URL from issue body (assumes the link is on its own line)
    # TODO: need a more robust way to extract the URL
    for line in issue_body.split('\n'):
        line = line.strip()
        if line.startswith('http://') or line.startswith('https://'):
            scrape_url(line)
            break
    else:
        print("No valid URL found in issue body")
        sys.exit(1)
```
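The URL extraction relies on Python's `for`/`else`: the `else` branch runs only when the loop finishes without hitting `break`, i.e. when no line looked like a URL. A sketch of that logic as a standalone helper (`extract_url` is hypothetical, added here for illustration):

```python
def extract_url(issue_body: str):
    """Return the first line of issue_body that looks like a URL, or None.

    Mirrors the for/else loop in get_manifest.py: returning inside the
    loop plays the role of `break`, and the trailing `return None` plays
    the role of the loop's `else` branch.
    """
    for line in issue_body.split('\n'):
        line = line.strip()
        if line.startswith('http://') or line.startswith('https://'):
            return line
    return None


body = "Please add this server:\nhttps://example.com/manifest.json\nthanks!"
print(extract_url(body))            # https://example.com/manifest.json
print(extract_url("no link here"))  # None
```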

0 commit comments
