Commit d97bcd2

feat: add scripts for creating GitHub issues and dual mode feature requests
1 parent 8022176 commit d97bcd2

File tree

3 files changed: +241 −0 lines changed

Script/create_issue.py

Lines changed: 73 additions & 0 deletions
# create_issue.py - Create a new GitHub issue for AI Token Crusher
import os
import sys

import requests

# Read the token from the environment; never hardcode (or commit) credentials.
TOKEN = os.environ.get("GITHUB_TOKEN", "")
if not TOKEN:
    sys.exit("Set the GITHUB_TOKEN environment variable first.")
OWNER = "totalbrain"
REPO = "TokenOptimizer"
PROJECT_ID = "1"  # Must be the ProjectV2 global node ID (a "PVT_..." string), not the board number

HEADERS = {
    "Authorization": f"Bearer {TOKEN}",
    "Accept": "application/vnd.github+json"
}

# Issue details
title = "Refactor: Separate Core Logic from UI for CLI/GUI Independence"
body = """
**Description**
Separate the core optimization logic from the UI to make the app modular. The core should handle input/output independently (e.g., console, file, event viewer) without relying on Tkinter, much like logging systems that write to multiple sinks.

**Why is this useful?**
- Enables CLI mode (--terminal) without GUI dependencies
- Makes output flexible (console, file, notification, etc.)
- Improves maintainability and testability
- Allows future extensions (web API, integration with other tools)
- Reduces main.py length by modularizing

**Implementation Ideas**
- Create src/core.py for all optimizations (apply_optimizations)
- Use abstract I/O handlers (e.g., ConsoleOutput, FileOutput)
- Main entry: check args --gui / --terminal and load the appropriate mode
- Example structure:
  - src/core.py: Pure functions for token crushing
  - src/ui.py: GUI logic (Tkinter)
  - src/cli.py: Console logic (argparse)
  - main.py: Parse args and route to UI or CLI

**Additional context**
- Inspired by logging modules (console, file handlers)
- Relates to issue #X (CLI version)
- Test with pytest: core independent of UI

**Priority**
- [x] Must-have
"""

labels = ["enhancement", "priority:high", "refactor"]

# Create the issue via the REST API
issue_url = f"https://api.github.com/repos/{OWNER}/{REPO}/issues"
data = {
    "title": title,
    "body": body,
    "labels": labels
}
response = requests.post(issue_url, headers=HEADERS, json=data)
if response.status_code == 201:
    issue = response.json()
    issue_number = issue["number"]
    print(f"Issue #{issue_number} created successfully!")
else:
    print(f"Error creating issue: {response.text}")
    sys.exit(1)

# Add the issue to the ProjectV2 board via GraphQL. addProjectV2ItemById
# takes global node IDs, so use the issue's "node_id" (not its numeric "id").
gql_body = {
    "query": f'mutation {{ addProjectV2ItemById(input: {{projectId: "{PROJECT_ID}", contentId: "{issue["node_id"]}"}}) {{ item {{ id }} }} }}'
}
gql_response = requests.post("https://api.github.com/graphql", headers=HEADERS, json=gql_body)
# GraphQL returns HTTP 200 even for logical errors, so inspect the payload too.
if gql_response.status_code == 200 and "errors" not in gql_response.json():
    print("Issue added to Project Board!")
else:
    print(f"Error adding to board: {gql_response.text}")
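A note on PROJECT_ID: the addProjectV2ItemById mutation expects the project's global node ID (a "PVT_..." string), not the number visible in the board URL. A hedged sketch of looking it up — the helper names here are illustrative, while the `user { projectV2(number:) { id } }` query shape is GitHub's GraphQL schema:

```python
def project_id_query(login: str, number: int) -> str:
    """Build a GraphQL query resolving a user-level ProjectV2 number to its node ID."""
    return (f'query {{ user(login: "{login}") '
            f'{{ projectV2(number: {number}) {{ id }} }} }}')

def fetch_project_node_id(headers: dict, login: str, number: int) -> str:
    import requests  # imported lazily so the query builder stays dependency-free
    resp = requests.post("https://api.github.com/graphql",
                         headers=headers,
                         json={"query": project_id_query(login, number)})
    resp.raise_for_status()
    # The returned ID looks like "PVT_..." and can be used as PROJECT_ID above.
    return resp.json()["data"]["user"]["projectV2"]["id"]
```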

create-cli-dual-mode-issue.ps1

Lines changed: 69 additions & 0 deletions
# create-cli-issue.ps1 - Creates the CLI + GUI dual mode feature request

$Repo = "totalbrain/TokenOptimizer"

Write-Host "Creating CLI + GUI Dual Mode Feature Request..." -ForegroundColor Cyan

gh issue create --repo $Repo --title "[Feature] Dual Mode: CLI + GUI with full optimization control via flags" --label "enhancement" --label "good first issue" --label "cli" --body @"
---
name: Feature Request
labels: enhancement, good first issue, cli
---

### Describe the feature

Add full dual-mode support (CLI + GUI):

```bash
python main.py                 # GUI (default)
python main.py --gui           # Force GUI
python main.py --terminal --file input.py
python main.py --cli --dir ./project/ --single-line --shorten-keywords
```

All current optimization options must be controllable via CLI flags.

### Why is this useful?

- Automation & scripting
- Batch processing
- CI/CD integration
- VS Code tasks / Git hooks
- Headless servers & Docker
- Much faster for power users

### CLI Flags to support

| Flag                       | Description                               |
|----------------------------|-------------------------------------------|
| --terminal / --cli         | Run in terminal mode                      |
| --gui                      | Force GUI mode                            |
| --file PATH                | Input file                                |
| --dir PATH                 | Process all files in directory            |
| --output PATH              | Output file (default: stdout)             |
| --remove-comments          | Enable comment removal                    |
| --remove-docstrings        | Enable docstring removal                  |
| --shorten-keywords         | def → d, return → r, etc.                 |
| --replace-booleans         | True→1, False→0, None→~                   |
| --use-short-operators      | ==→≡, !=→≠, and→∧, or→∨                   |
| --single-line              | Replace newlines with ⏎                   |
| --unicode-shortcuts        | in→∈, not in→∉                            |
| --config FILE              | Load settings from JSON/YAML              |

### Example usage

```bash
python main.py --terminal --file prompt.py --single-line --shorten-keywords > crushed.py
find . -name "*.py" -exec python main.py --cli --file {} \;
```

### Priority
- [x] Must-have
"@

Write-Host "Feature Request created successfully!" -ForegroundColor Green
Write-Host "View all issues: https://github.com/$Repo/issues" -ForegroundColor Yellow
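The flag table in the issue body above maps naturally onto Python's argparse. A minimal sketch covering a subset of the flags — the parser layout and `build_parser` name are assumptions, not the project's actual CLI code:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    p = argparse.ArgumentParser(prog="main.py", description="AI Token Crusher")
    # GUI and terminal modes are mutually exclusive; --cli is an alias for --terminal.
    mode = p.add_mutually_exclusive_group()
    mode.add_argument("--gui", action="store_true", help="force GUI mode")
    mode.add_argument("--terminal", "--cli", dest="terminal", action="store_true",
                      help="run in terminal mode")
    p.add_argument("--file", metavar="PATH", help="input file")
    p.add_argument("--dir", metavar="PATH", help="process all files in directory")
    p.add_argument("--output", metavar="PATH", help="output file (default: stdout)")
    # Each optimization technique becomes an on/off switch.
    for flag in ("--remove-comments", "--remove-docstrings", "--shorten-keywords",
                 "--replace-booleans", "--use-short-operators", "--single-line",
                 "--unicode-shortcuts"):
        p.add_argument(flag, action="store_true")
    p.add_argument("--config", metavar="FILE", help="load settings from JSON/YAML")
    return p

args = build_parser().parse_args(["--terminal", "--file", "input.py", "--single-line"])
print(args.terminal, args.file, args.single_line)  # prints: True input.py True
```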
Lines changed: 99 additions & 0 deletions
### Prompt for New Chat: AI Token Crusher Full Project Overview and Future Roadmap

You are Grok, a helpful AI built by xAI. Now, let's continue developing AI Token Crusher – an open-source, offline desktop tool that reduces character count in code/text by up to 75% while keeping it 100% readable and functional for LLMs like Grok, GPT-4o, Claude 3.5, Llama 3.1, Gemini, etc. The core goal is token efficiency for prompt engineering and AI workflows, prioritizing minimal characters over human readability (while staying AI-safe).

#### Project Details & Background
- Repository: https://github.com/totalbrain/TokenOptimizer
- Product Name: AI Token Crusher
- Version: v1.0.1 (with a recent community PR merged for the dark/light theme toggle)
- License: MIT
- Default Branch: main (but consider switching to dev for development)
- Project Board: https://github.com/users/totalbrain/projects/1 (18 features in Backlog)
- Core Idea: A modular engine for token crushing with techniques like comment removal, keyword shortening, unicode shortcuts, etc. Input: text/file. Output: optimized text + stats. Modes: GUI (Tkinter dark UI) and CLI (--terminal).
- Motivation: Reduce LLM API costs/time by compressing prompts/code without losing meaning. Born from a simple Python script request, it evolved into a full app with GUI, tests, and community contributions.
- Key Techniques (20+ implemented in core/techniques):
  - Remove comments (# and multi-line)
  - Remove docstrings
  - Strip blank lines
  - Remove extra/trailing spaces
  - Single-line mode (replace \n with ⏎)
  - Shorten keywords (def → d, return → r, etc.)
  - Replace booleans (True → 1, False → 0, None → ~)
  - Short operators (== → ≡, != → ≠, and → ∧, or → ∨)
  - Remove type hints
  - Minify structures (dict/list without spaces)
  - Unicode shortcuts (in → ∈, not in → ∉, for → ∀)
  - Shorten print calls ("print(" → "p(")
  - Remove asserts and pass
  - (Finding new techniques: search GitHub for "token minifier" repos, Reddit r/ChatGPTCoding for tricks, and papers like LLMLingua-2 from Microsoft Research for advanced compression)
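A few of these techniques can be sketched as pure string transforms. This is a minimal illustration only — the real implementations live in src/core/engine.py, and the simplified regexes here are not AI-safe for every edge case (e.g., "#" inside string literals):

```python
import re

def remove_comments(code: str) -> str:
    # Simplified: strips trailing "#" comments line by line
    return "\n".join(re.sub(r"\s*#.*$", "", line) for line in code.splitlines())

def replace_booleans(code: str) -> str:
    # True → 1, False → 0, None → ~ (whole words only)
    for old, new in (("True", "1"), ("False", "0"), ("None", "~")):
        code = re.sub(rf"\b{old}\b", new, code)
    return code

def single_line(code: str) -> str:
    # Replace newlines with the ⏎ marker
    return code.replace("\n", "⏎")

def apply_optimizations(code: str, passes=(remove_comments, replace_booleans)) -> str:
    # Run each enabled technique as a sequential pass over the text
    for p in passes:
        code = p(code)
    return code

crushed = apply_optimizations("x = True  # flag\ny = None")
print(crushed)  # prints "x = 1" then "y = ~"
```

Keeping each technique a pure function is what makes the core testable independently of the Tkinter UI, as the refactor issue above proposes.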
- Current Files (full code attached below):
  1. main.py (entry point – detects GUI/CLI)
  2. README.md (professional with screenshots and roadmap link)
  3. LICENSE (MIT)
  4. src/core/engine.py (core logic – independent)
  5. src/interfaces/gui/app.py (GUI – uses core)
  6. src/interfaces/cli/main.py (CLI – uses core)
  7. tests/test_theme.py (unit tests for theme toggle)
  8. assets/screenshot1.png, etc. (from Carbon.now.sh, Dracula theme)

#### Full History of the Project (Summary of Our Chats)
- Started with a Persian request for a Python GUI app to reduce characters in text/code for LLM token savings (focus on AI-safe minification).
- Built the initial Tkinter GUI with 20+ techniques, English UI, dark theme, about/contact pages, GitHub links.
- Set up the GitHub repo, issue templates, and a project board with 18 features (e.g., real token counter, CLI, VS Code extension).
- Merged a community PR for the dark/light theme toggle (#19 from @Syogo-Suganoya).
- Refactored to Clean Architecture: core independent from UI/CLI (pure functions in src/core).
- Added CLI mode (--terminal --file input.txt).
- Tests with pytest for theme and core.
- Release v1.0.0 created; v1.0.1 planned with the theme toggle.
- Product Hunt prep: name, tagline, description, tags, first comment, and thumbnails ready (English optimized).

#### Future Roadmap & Development Paths
- **Short-Term (Next 1-2 Weeks – Must-Do to Avoid Issues)**:
  1. Set up a dev branch (git checkout -b dev; git push origin dev; set as default in settings).
  2. Protect main (Settings > Branches > Add rule: Require PR, 1 approval).
  3. Add GitHub Actions for auto-tests on PRs to dev (YAML file in .github/workflows).
  4. Implement a real token counter (tiktoken + multi-model) in core/engine.py – integrate into GUI/CLI stats.
  5. Add full CLI support (--profile aggressive, --options "remove_comments,shorten_print").
  6. Build a single-file exe with PyInstaller (pyinstaller --onefile main.py) and upload it to the release.
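For the token counter task, one possible shape — a sketch, not the engine's final API: tiktoken is the library named above, but the function names and the chars-per-token fallback heuristic are assumptions:

```python
def count_tokens(text: str, model: str = "gpt-4o") -> int:
    """Count tokens with tiktoken when available; otherwise use a rough heuristic."""
    try:
        import tiktoken  # optional dependency, as proposed for core/engine.py
        enc = tiktoken.encoding_for_model(model)
        return len(enc.encode(text))
    except Exception:
        # Crude fallback: roughly 4 characters per token for English-like text
        return max(1, len(text) // 4)

def savings(before: str, after: str, model: str = "gpt-4o") -> float:
    """Percentage of tokens saved by an optimization pass (for GUI/CLI stats)."""
    b = count_tokens(before, model)
    return 100.0 * (b - count_tokens(after, model)) / b
```

Wrapping the counter behind a single function keeps the GUI and CLI stats panels agnostic about whether tiktoken is installed.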
- **Medium-Term (1-2 Months – Scale & Community)**:
  7. VS Code extension (right-click → Crush Tokens, uses core).
  8. Global hotkey for selected text crush (uses core + pyautogui).
  9. Preset profiles in config.py (safe/aggressive/nuclear).
  10. Live chart for savings (matplotlib in GUI).
  11. Auto-detect language (Python/JSON/MD) in engine.
  12. Multi-language UI (translations in JSON files).
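The preset profiles could start as a plain mapping in config.py. A sketch under assumptions: the technique names mirror the list earlier in this prompt, and the exact schema is not decided yet:

```python
# config.py (sketch) - which techniques each preset enables
PROFILES = {
    "safe": ["remove_comments", "strip_blank_lines", "remove_trailing_spaces"],
    "aggressive": ["remove_comments", "remove_docstrings", "strip_blank_lines",
                   "remove_trailing_spaces", "shorten_keywords", "replace_booleans"],
    "nuclear": ["remove_comments", "remove_docstrings", "strip_blank_lines",
                "remove_trailing_spaces", "shorten_keywords", "replace_booleans",
                "use_short_operators", "single_line", "unicode_shortcuts"],
}

def techniques_for(profile: str) -> list:
    # Unknown profile names fall back to the conservative preset
    return PROFILES.get(profile, PROFILES["safe"])
```

A data-driven profile table like this would let --profile on the CLI and a dropdown in the GUI share one source of truth.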
- **Long-Term (3+ Months – Advanced Features)**:
  13. Web API version (FastAPI + core).
  14. Telegram Bot integration (crush via chat).
  15. Benchmark mode (send before/after to an LLM API and show $ saved).
  16. New techniques discovery: a script to crawl GitHub for "token minifier" repos, extract ideas, and add them to techniques/.

- **Implementation Tasks (Priority Now)**:
  - Run the script to set up the dev branch (provided earlier).
  - Add pytest to requirements; run tests before every merge.
  - For PRs: always check out the PR branch, test locally, then merge.
  - Daily: check issues, assign to contributors like @Syogo-Suganoya for GUI.
#### Product Hunt & Audience Growth Opportunities
- **Product Hunt Launch**: All fields ready (name: AI Token Crusher; tagline: Cut up to 75% of tokens for Grok/GPT/Claude/Llama; tags: AI, Developer Tools, Open Source). Launch Tuesday 9AM EST for maximum visibility. Post the first comment as provided.
- **More Audience**:
  - Twitter/X: Create @TokenCrusherAI, post the v1.0.1 release, tag @xAI, @OpenAI, @AnthropicAI. Use #AI #OpenSource #PromptEngineering.
  - Reddit: r/LocalLLaMA (10k+), r/MachineLearning (500k+), r/Python (1M+). Post "Built AI Token Crusher – 75% token savings, open-source!"
  - Hacker News: Submit "Show HN: AI Token Crusher – Offline tool for 75% token reduction" (aim for 100+ upvotes).
  - Indie Hackers / DEV.to: Write "How I Built an AI Token Minifier in 1 Week" with a link.
  - Discord communities: AI Devs and prompt engineering servers – share the release.
  - Goal: 500 stars in month 1 via cross-posting.

We have the files ready (main.py, README.md, tests, etc.) – I'll send them now.

In this new chat, let's focus on:
- Implementing the short-term tasks (dev branch, full CLI)
- Merging more PRs safely
- Launching on Product Hunt
- Growing to 1k stars
- Adding the real token counter first

What do you want to tackle next? 🚀
