Welcome to PromptPunk! Your ultimate AI playground for crafting, testing, and managing LLM prompts with ease. This tool helps you test and evaluate prompts across multiple language models, with insights into performance and cost so you can optimize your usage effectively.
- Interactive Chat Interface: Test your prompts directly and see responses from AI models.
- Multi-Provider Support:
  - OpenAI (e.g., GPT-3.5-turbo, GPT-4)
  - Google (e.g., Gemini Pro, Gemini Flash)
- Dynamic Model Fetching: Automatically fetches available models from your configured provider once an API key is entered.
- Prompt Management:
- Create, save, and edit prompts.
- Organize prompts, including system prompts and template prompts.
- Use preset system prompts to get started quickly.
- Template Prompts: Define prompts with `{query}` placeholders for dynamic user input.
- Token Usage & Cost Estimation: Keep track of your input/output tokens and estimated costs.
- Configuration Modal: Easily configure API keys, select models, and providers.
- Conversation Export: Save your chat conversations as `.txt` files.
- Dark/Light Mode: Comfortable viewing in any lighting.
- Persistent Configuration: API settings are saved in `localStorage` for your convenience.
- Responsive Design: Adapts to different screen sizes.
- Security First: API keys are stored locally in browser storage, not sent to any third-party server of ours.
If your repository contains the Next.js application within the `punk-chat` subdirectory, the relevant structure is:
```
PromptPunk/                          # Your Git repository root
└── punk-chat/
    ├── .next/                       # Next.js build output (generated)
    ├── app/                         # Next.js App Router (pages, layouts, actions)
    │   ├── layout.tsx               # Main layout
    │   ├── page.tsx                 # Main page component
    │   └── actions.ts               # Server Actions (e.g., saveConversation)
    ├── components/                  # React components
    │   ├── config-modal.tsx         # API configuration modal
    │   ├── prompt-manager.tsx       # Prompt management UI
    │   ├── theme-toggle.tsx         # Dark/Light mode toggle
    │   └── ui/                      # UI primitives (e.g., button, textarea from shadcn/ui)
    ├── lib/                         # Utility functions and hooks
    │   ├── tokenizer.ts             # Token counting logic
    │   ├── use-openai.ts            # Custom hook for LLM API interaction & state
    │   └── utils.ts                 # General utility functions (e.g., cn for classnames)
    ├── public/                      # Static assets (e.g., images, favicons)
    ├── .gitignore
    ├── components.json              # shadcn/ui configuration
    ├── eslint.config.mjs            # ESLint configuration
    ├── next-env.d.ts                # Next.js TypeScript declarations
    ├── next.config.js               # Next.js configuration for GitHub Pages
    ├── package-lock.json
    ├── package.json
    ├── postcss.config.mjs           # PostCSS configuration
    ├── tailwind.config.js           # Tailwind CSS configuration (JavaScript version, potentially older)
    ├── tailwind.config.ts           # Tailwind CSS configuration (TypeScript version, primary)
    └── tsconfig.json                # TypeScript configuration
```
Access the production deployment directly: PromptPunk
(Note: The Vercel demo might not reflect the latest changes aimed at GitHub Pages deployment unless updated separately.)
- Clone the repository:

  ```bash
  git clone https://github.com/VinsmokeSomya/PromptPunk.git
  cd PromptPunk
  ```

- Navigate to the application directory:

  ```bash
  cd punk-chat
  ```

- Install dependencies:

  ```bash
  npm install
  ```

  (or `yarn install`)

- Set up API keys:
  - You'll need API keys from OpenAI and/or Google AI Studio to use the respective models.
  - Once the application is running (see the next step), click the "API Config" button in the app to enter your keys.

- Run the development server:

  ```bash
  npm run dev
  ```

  (or `yarn dev`)

- Open http://localhost:3000 (or the port shown in your terminal, usually 3000) in your browser to see the application.
- Once the app is running, click the "API Config" button (previously "Config").
- Select your provider (OpenAI or Google).
- Enter your API Key.
- Click "Fetch Available Models" to populate the model list.
- Select your target model from the dropdown.
- (Optional) For OpenAI, the Base URL is typically pre-filled but can be adjusted if needed.
- Save configuration.
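Under the hood, "Fetch Available Models" for OpenAI presumably calls the provider's model-listing endpoint. A minimal sketch of such a call (assumed helper name, not the app's actual code, which lives in `lib/use-openai.ts`) might look like:

```typescript
// Hypothetical helper: list model IDs from an OpenAI-compatible API.
// Uses the documented GET /v1/models endpoint with a Bearer token.
async function fetchOpenAIModels(
  apiKey: string,
  baseUrl: string = "https://api.openai.com/v1",
): Promise<string[]> {
  const res = await fetch(`${baseUrl}/models`, {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  if (!res.ok) {
    throw new Error(`Failed to fetch models: HTTP ${res.status}`);
  }
  const body = (await res.json()) as { data: { id: string }[] };
  return body.data.map((m) => m.id).sort();
}
```

Passing a custom `baseUrl` is what makes the adjustable Base URL field useful: the same helper then works against any OpenAI-compatible endpoint.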
This project can be deployed as a static site to GitHub Pages.
- Configuration: The `punk-chat/next.config.js` file has been configured for static export with `output: 'export'`, `basePath: '/PromptPunk'`, and `assetPrefix: '/PromptPunk'`.
- Build: Navigate to the `punk-chat` directory and run `npm run build`. This will create an `out` folder.
- Deploy: Push the contents of the `punk-chat/out` folder to the `gh-pages` branch of your `VinsmokeSomya/PromptPunk` repository. You can use the `gh-pages` npm package or a GitHub Action for this.
  - Example using the `gh-pages` package (run from the `punk-chat` directory):

    ```bash
    npm install --save-dev gh-pages
    # Add to package.json scripts: "deploy": "gh-pages -d out -t true"
    npm run deploy
    ```

- Your site should be available at https://VinsmokeSomya.github.io/PromptPunk.
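For reference, a `next.config.js` matching the settings described in the deployment steps could look like the following. This is a sketch reconstructed from those settings, not a copy of the repository's file; the `images.unoptimized` flag is an added assumption, commonly needed because static export cannot run the `next/image` optimizer:

```javascript
// punk-chat/next.config.js — sketch of a static-export configuration
/** @type {import('next').NextConfig} */
const nextConfig = {
  output: 'export',              // write a fully static site to the `out/` folder
  basePath: '/PromptPunk',       // app is served under /PromptPunk on GitHub Pages
  assetPrefix: '/PromptPunk',    // prefix for static assets (JS, CSS, images)
  images: { unoptimized: true }, // assumption: required if next/image is used with static export
};

module.exports = nextConfig;
```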
The "Save Conversation" feature works by generating a file for download in the browser, so it's compatible with static hosting.
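A browser-only download of this kind can be sketched as follows. The message shape and function names are assumptions for illustration; the app's actual implementation may differ:

```typescript
// Sketch (not the app's exact code) of saving a conversation as a .txt
// file entirely client-side, so it works on a static host like GitHub Pages.
interface ChatMessage {
  role: "user" | "assistant" | "system";
  content: string;
}

// Render the conversation as plain text, one block per message.
function formatConversation(messages: ChatMessage[]): string {
  return messages.map((m) => `[${m.role}] ${m.content}`).join("\n\n");
}

// Build a Blob from the text and trigger a download via a temporary link.
function downloadConversation(
  messages: ChatMessage[],
  filename = "conversation.txt",
): void {
  const blob = new Blob([formatConversation(messages)], { type: "text/plain" });
  const url = URL.createObjectURL(blob);
  const a = document.createElement("a");
  a.href = url;
  a.download = filename;
  a.click();
  URL.revokeObjectURL(url); // release the object URL once the download starts
}
```

Because no server round-trip is involved, this works identically on GitHub Pages, Vercel, or a local `npm run dev` session.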
Contributions are welcome! Please follow these steps:
- Fork the repository (`VinsmokeSomya/PromptPunk`)
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
Happy Prompting!