This system automates the ChatGPT Micro-Cap trading experiment by integrating LLM APIs to generate daily trading decisions.
- Automated Prompt Generation: Creates daily trading prompts with current portfolio data
- LLM Integration: Supports OpenAI GPT API
- Trade Execution: Parses LLM responses and executes recommended trades
- Risk Management: Includes confidence thresholds and dry-run modes
- Logging: Saves all LLM responses and trading decisions
```shell
pip install -r requirements.txt
pip install openai
```

- OpenAI: Get your API key from the OpenAI Platform
```shell
# For OpenAI
export OPENAI_API_KEY="your-openai-api-key"
```

The simple_automation.py script provides automated trading decisions:
```shell
# Basic usage with OpenAI
python simple_automation.py --api-key YOUR_OPENAI_KEY

# Using environment variable
export OPENAI_API_KEY="your-key"
python simple_automation.py

# Dry run (no actual trades executed)
python simple_automation.py --dry-run

# Custom model
python simple_automation.py --model gpt-3.5-turbo
```

- Loads current portfolio state from CSV files
- Calculates cash balance and total equity
- Formats holdings data for LLM consumption
- Creates structured prompts with portfolio data
- Includes trading rules and constraints
- Requests JSON-formatted responses
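A prompt built from that state might look like the sketch below. The exact wording and JSON schema are illustrative; the real generate_trading_prompt() in simple_automation.py may differ.

```python
def generate_trading_prompt(state: dict) -> str:
    """Build a daily trading prompt from portfolio state (illustrative wording)."""
    holdings = "\n".join(
        f"- {h['ticker']}: {h['shares']} shares @ ${h['price']:.2f}"
        for h in state["holdings"]
    ) or "- (no open positions)"
    return (
        "You are managing a micro-cap stock portfolio.\n"
        f"Cash: ${state['cash']:.2f}  Total equity: ${state['total_equity']:.2f}\n"
        f"Holdings:\n{holdings}\n\n"
        "Rules: only stocks under $300M market cap; max 10% of the portfolio "
        "per position; keep at least $500 in cash.\n"
        # Requesting a strict JSON reply makes the response machine-parseable.
        "Respond with JSON only: "
        '{"trades": [{"action": "buy|sell", "ticker": "...", '
        '"shares": 0, "confidence": 0.0}], "reasoning": "..."}'
    )
```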
- Sends prompts to chosen LLM API
- Receives trading recommendations
- Parses JSON responses for trade details
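The request/parse round trip could be implemented roughly as below, using the OpenAI Python SDK (v1 style, `client.chat.completions.create`). The function names and the fence-tolerant parsing are illustrative sketches, not the actual implementation.

```python
import json
import re

def get_llm_decision(prompt: str, model: str = "gpt-4") -> dict:
    """Send the prompt to OpenAI and parse the reply.

    Requires `pip install openai` and OPENAI_API_KEY in the environment.
    """
    from openai import OpenAI  # deferred so the parser below works without the SDK
    client = OpenAI()
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0.3,
        max_tokens=1500,
    )
    return parse_llm_response(resp.choices[0].message.content)

def parse_llm_response(text: str) -> dict:
    """Pull the JSON object out of the reply, tolerating surrounding prose or fences."""
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if not match:
        raise ValueError("no JSON object found in LLM response")
    return json.loads(match.group(0))
```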
- Validates trade recommendations
- Checks cash availability and position limits
- Executes approved trades (or logs in dry-run mode)
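The validation step can be sketched as one check per recommended trade, using the limits listed later in this document (10% position cap, 70% confidence threshold, $500 cash reserve). The function name is illustrative, and prices are taken from the trade dict for simplicity; the real system would use a live quote.

```python
MAX_POSITION_PCT = 0.10   # max 10% of total equity per position
MIN_CONFIDENCE = 0.70     # skip low-confidence recommendations
MIN_CASH_RESERVE = 500.0  # dollars to keep uninvested

def validate_trade(trade: dict, state: dict) -> tuple[bool, str]:
    """Check one recommended trade against confidence, cash, and size limits."""
    if trade.get("confidence", 0) < MIN_CONFIDENCE:
        return False, "confidence below threshold"
    if trade["action"] == "buy":
        cost = trade["shares"] * trade["price"]
        if cost > state["cash"] - MIN_CASH_RESERVE:
            return False, "would breach the cash reserve"
        if cost > MAX_POSITION_PCT * state["total_equity"]:
            return False, "position exceeds 10% of equity"
    return True, "ok"
```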
- Temperature: 0.3 (lower for more consistent decisions)
- Max Tokens: 1500-2000 (adjust based on model)
- Model: GPT-4 recommended for best results
- Maximum position size: 10% of portfolio
- Minimum confidence threshold: 70%
- Cash reserve: Minimum $500
- Micro-cap focus: <$300M market cap
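The LLM settings and risk parameters above could be collected into a single configuration object. This is a minimal sketch; the class name and field names are illustrative, not the actual simple_automation.py configuration.

```python
from dataclasses import dataclass

@dataclass
class AutomationConfig:
    """Defaults mirroring the parameters listed above (names are illustrative)."""
    model: str = "gpt-4"           # GPT-4 recommended for best results
    temperature: float = 0.3       # low for consistent decisions
    max_tokens: int = 1500         # raise toward 2000 for verbose models
    max_position_pct: float = 0.10  # max 10% of portfolio per position
    min_confidence: float = 0.70    # reject recommendations below this
    min_cash_reserve: float = 500.0  # keep at least $500 uninvested
    max_market_cap: float = 300e6    # micro-cap focus: under $300M
```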
- llm_responses.jsonl: all LLM interactions and responses
- automated_trades.jsonl: all automated trading decisions
- Uses existing CSV files from the original trading script
- Maintains compatibility with manual trading system
Always test with --dry-run first to see what trades would be executed:

```shell
python simple_automation.py --dry-run
```

The system includes confidence scoring to avoid acting on low-quality recommendations.
- Graceful handling of API failures
- JSON parsing error recovery
- Invalid trade validation
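Graceful handling of API failures usually means retrying with backoff before giving up. A small generic helper (the name `with_retries` is hypothetical, not part of the scripts) could look like:

```python
import time

def with_retries(call, attempts: int = 3, base_delay: float = 1.0):
    """Retry a flaky callable (e.g. an API request) with exponential backoff.

    Re-raises the last exception if every attempt fails.
    """
    for i in range(attempts):
        try:
            return call()
        except Exception:
            if i == attempts - 1:
                raise
            time.sleep(base_delay * 2 ** i)  # 1s, 2s, 4s, ... between attempts
```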
The automation system is designed to work alongside the existing manual trading system:
- Same Data Files: Uses the same CSV files as the manual system
- Compatible Format: Maintains the same portfolio and trade log formats
- Fallback Support: Can switch between automated and manual modes
```shell
# 1. Test the system with dry run
python simple_automation.py --dry-run

# 2. Run automated trading
python simple_automation.py

# 3. Review results
tail -1 "Start Your Own/llm_responses.jsonl"

# 4. Check portfolio updates
python "Start Your Own/Trading_Script.py"
```
- API Key Not Found: set the key in your environment:

  ```shell
  export OPENAI_API_KEY="your-key"
  ```

- JSON Parsing Errors
  - Check the LLM response format
  - Verify the model supports structured output
  - Try different temperature settings

- Trade Execution Failures
  - Check cash availability
  - Verify ticker symbols
  - Review position size limits
Add verbose logging by modifying the scripts to print more details about the LLM interactions.
Modify the generate_trading_prompt() function to customize the prompts sent to the LLM.
Experiment with different models:
- gpt-4: Best performance, higher cost
- gpt-3.5-turbo: Good performance, lower cost
Adjust risk parameters in the configuration or modify the validation logic.
- Never commit API keys to version control
- Use environment variables for sensitive data
- Consider rate limiting for API calls
- Monitor API usage and costs
For issues or questions:
- Check the troubleshooting section
- Review the LLM response logs
- Test with dry-run mode first
- Verify API key permissions and quotas