This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
Before responding to any request, I must:
- Search memory for user preferences related to the current topic/request
- Apply saved preferences without asking again
- Only save NEW preferences, corrections, or special treatments - not tasks or general info
- Check for topic-specific preferences (e.g., favorite subjects, style preferences, format preferences)
ALWAYS use ./test.sh to run tests and examples. The environment variables are not set globally, but test.sh handles this automatically.
```bash
# CORRECT - Always use test.sh:
./test.sh examples/01_basic_client_connection.py
./test.sh examples/21_statistics_usage.py
./test.sh /tmp/test_script.py

# WRONG - Never use these directly:
uv run python examples/01_basic_client_connection.py
PROJECT_X_API_KEY="..." PROJECT_X_USERNAME="..." uv run python script.py
```

The test.sh script properly configures all required environment variables. DO NOT attempt to set PROJECT_X_API_KEY or PROJECT_X_USERNAME manually.
IMPORTANT: This project uses a fully asynchronous architecture. All APIs are async-only, optimized for high-performance futures trading.
IMPORTANT: This project has reached stable production status. When making changes:
- Maintain Backward Compatibility: Keep existing APIs functional with deprecation warnings
- Deprecation Policy: Mark deprecated features with warnings, remove after 2 minor versions
- Semantic Versioning: Follow semver strictly (MAJOR.MINOR.PATCH)
- Migration Paths: Provide clear migration guides for breaking changes
- Modern Patterns: Use the latest Python patterns while maintaining compatibility
- Gradual Refactoring: Improve code quality without breaking existing interfaces
- Async-First: All new code must use async/await patterns
Example approach:
- ✅ DO: Keep old method signatures with deprecation warnings
- ✅ DO: Provide new improved APIs alongside old ones
- ✅ DO: Add compatibility shims when necessary
- ✅ DO: Document migration paths clearly
- ❌ DON'T: Break existing APIs without major version bump
- ❌ DON'T: Remove deprecated features without proper notice period
- Use the standardized `@deprecated` decorator from `project_x_py.utils.deprecation`
- Provide a clear reason, version info, and replacement path
- Keep deprecated features for at least 2 minor versions
- Remove only in major version releases (4.0.0, 5.0.0, etc.)
Example:

```python
from project_x_py.utils.deprecation import deprecated, deprecated_class

# For functions/methods
@deprecated(
    reason="Method renamed for clarity",
    version="3.1.14",           # When deprecated
    removal_version="4.0.0",    # When it will be removed
    replacement="new_method()"  # What to use instead
)
def old_method(self):
    return self.new_method()

# For classes
@deprecated_class(
    reason="Integrated into TradingSuite",
    version="3.1.14",
    removal_version="4.0.0",
    replacement="TradingSuite"
)
class OldManager:
    pass
```

The standardized deprecation utilities provide:
- Consistent warning messages across the SDK
- Automatic docstring updates with deprecation info
- IDE support through the `deprecated` package
- Metadata tracking for deprecation management
- Support for functions, methods, classes, and parameters
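For intuition, the core mechanism behind such a decorator can be sketched in a few lines. This is an illustrative reimplementation only, not the SDK's actual `project_x_py.utils.deprecation` code:

```python
import functools
import warnings

def deprecated(reason: str, version: str, removal_version: str, replacement: str):
    """Minimal sketch of a deprecation decorator (illustrative, not the SDK's)."""
    def decorator(func):
        message = (
            f"{func.__name__} is deprecated since {version} ({reason}); "
            f"use {replacement} instead. Scheduled for removal in {removal_version}."
        )

        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            # Emit at the caller's frame so the warning points at user code
            warnings.warn(message, DeprecationWarning, stacklevel=2)
            return func(*args, **kwargs)

        # Surface the deprecation in the docstring for IDE tooltips
        wrapper.__doc__ = f"DEPRECATED: {message}\n\n{func.__doc__ or ''}"
        return wrapper
    return decorator

@deprecated(
    reason="Method renamed for clarity",
    version="3.1.14",
    removal_version="4.0.0",
    replacement="new_method()",
)
def old_method():
    return 42
```

Calling `old_method()` still works but emits a `DeprecationWarning` with the reason, version, and replacement path, matching the policy above.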
CRITICAL: This project follows strict Test-Driven Development principles. Tests define the specification, not the implementation.
- **Write Tests FIRST**
- Tests must be written BEFORE implementation code
- Tests define the contract/specification of how code should behave
- Follow Red-Green-Refactor cycle religiously
- **Tests as Source of Truth**
- Tests validate EXPECTED behavior, not current behavior
- If existing code fails a test, FIX THE CODE, not the test
- Tests document how the system SHOULD work
- Never write tests that simply match faulty logic
- **Red-Green-Refactor Cycle**
  1. RED: Write a failing test that defines expected behavior
  2. GREEN: Write minimal code to make the test pass
  3. REFACTOR: Improve code while keeping tests green
  4. REPEAT: Continue for the next feature/requirement
- **Testing Existing Code**
- Treat tests as debugging tools
- Write tests for what the code SHOULD do, not what it currently does
- If tests reveal bugs, fix the implementation
- Only modify tests if requirements have genuinely changed
- **Test Writing Principles**
- Each test should have a single, clear purpose
- Test outcomes and behavior, not implementation details
- Tests should be independent and isolated
- Use descriptive test names that explain the expected behavior
```python
# Step 1: Write the test FIRST (Red phase)
@pytest.mark.asyncio
async def test_order_manager_places_bracket_order():
    """Test that bracket orders create parent, stop, and target orders."""
    # Define expected behavior
    order_manager = OrderManager(mock_client)
    result = await order_manager.place_bracket_order(
        instrument="MNQ",
        quantity=1,
        stop_offset=10,
        target_offset=20,
    )
    # Assert expected outcomes
    assert result.parent_order is not None
    assert result.stop_order is not None
    assert result.target_order is not None
    assert result.stop_order.price == result.parent_order.price - 10
    assert result.target_order.price == result.parent_order.price + 20

# Step 2: Run test - it SHOULD fail (Red confirmed)
# Step 3: Implement minimal code to pass (Green phase)
# Step 4: Refactor implementation while keeping test green
# Step 5: Write next test for edge cases
```

When testing existing code:
```python
# WRONG: Writing a test to match buggy behavior
def test_buggy_calculation():
    # This matches what the code currently does (wrong!)
    assert calculate_risk(100, 10) == 1100  # Bug: should be 110

# CORRECT: Write the test for expected behavior
def test_risk_calculation():
    # This defines what the code SHOULD do
    assert calculate_risk(100, 10) == 110  # 10% of 100 is 10, total 110
    # If this fails, FIX calculate_risk(), don't change the test
```

- `tests/unit/` - Fast, isolated unit tests (mock all dependencies)
- `tests/integration/` - Test component interactions
- `tests/e2e/` - End-to-end tests with real services
- Always run tests with `./test.sh` for proper environment setup
- API Stability: Tests ensure backward compatibility
- Async Safety: Tests catch async/await issues early
- Financial Accuracy: Tests validate pricing and calculations
- Documentation: Tests serve as living documentation
- Refactoring Confidence: Tests enable safe refactoring
Remember: The test suite is the specification. Code must conform to tests, not vice versa.
Claude Code includes specialized agents that should be used PROACTIVELY for specific development tasks. Each agent has specialized knowledge and tools optimized for their domain.
Use for project-x-py SDK development tasks:
- Writing async trading components (OrderManager, PositionManager, etc.)
- Implementing financial indicators with Polars DataFrames
- Optimizing real-time data processing and WebSocket connections
- Creating new TradingSuite features
- Performance profiling with memory_profiler and py-spy
- Integration testing with mock market data generators
- Benchmark suite management for performance tracking
- WebSocket load testing and stress testing
Example scenarios:
- "Implement a new technical indicator"
- "Add WebSocket reconnection logic"
- "Create async order placement methods"
- "Profile memory usage in real-time data manager"
Enhanced capabilities:
- Memory profiling: `mprof run ./test.sh examples/04_realtime_data.py`
- Async profiling: `py-spy record -o profile.svg -- ./test.sh examples/00_trading_suite_demo.py`
- Benchmark tests: `uv run pytest tests/benchmarks/ --benchmark-only`
Use PROACTIVELY for maintaining SDK standards:
- ALWAYS check IDE diagnostics first via `mcp__ide__getDiagnostics`
- Automated pre-commit hook setup and validation
- Performance regression detection with benchmarks
- Memory leak detection via tracemalloc
- Security vulnerability scanning with bandit
- Dependency audit with pip-audit
- Verifying 100% async architecture
- Type safety with TypedDict/Protocol
Example scenarios:
- After implementing new features
- Before creating pull requests
- When refactoring existing code
- After any code changes - check IDE diagnostics immediately
Enhanced tools:
- Security scanning: `uv run bandit -r src/`
- Dependency audit: `uv run pip-audit`
- Pre-commit validation: `pre-commit run --all-files`
Use PROACTIVELY for architecture improvements:
- AST-based code analysis for safe refactoring
- Dependency graph visualization with pydeps
- API migration script generation
- Performance impact analysis
- Migrating to TradingSuite patterns
- Optimizing Polars operations
- Consolidating WebSocket handling
- Modernizing async patterns
Example scenarios:
- "Refactor OrderManager to use EventBus"
- "Optimize DataFrame operations in indicators"
- "Migrate legacy sync code to async"
- "Visualize component dependencies"
Use PROACTIVELY for documentation tasks:
- Interactive API documentation with mkdocs-material
- Automated changelog generation from commits
- Example notebook generation with papermill
- API reference auto-generation with mkdocstrings
- Writing migration guides
- Maintaining README and examples/
- Writing deprecation notices
- Updating docstrings
Example scenarios:
- After adding new features
- When changing APIs
- Creating example scripts
- Generating interactive documentation
Enhanced documentation:
- Build docs: `mkdocs build`
- Serve locally: `mkdocs serve`
- Generate notebooks: `papermill template.ipynb output.ipynb`
Use PROACTIVELY for troubleshooting:
- Production log analysis with structured logging
- Distributed tracing with OpenTelemetry
- Async debugging with aiomonitor
- Memory leak detection with objgraph and tracemalloc
- WebSocket packet analysis and replay
- Order lifecycle failures
- Real-time data gaps
- Event deadlocks
Example scenarios:
- "Debug why orders aren't filling"
- "Fix WebSocket reconnection issues"
- "Trace event propagation problems"
- "Analyze production memory leaks"
Enhanced debugging:
- Async monitor: `aiomonitor` on port 50101
- Memory analysis: `objgraph.show_growth()`
- Distributed tracing with OpenTelemetry
Use PROACTIVELY for code review:
- Security-focused review with semgrep
- Complexity analysis with radon
- Test coverage delta reporting
- Breaking change detection
- Performance benchmark comparison
- Reviewing async patterns
- Validating financial data integrity
- Ensuring API stability
Example scenarios:
- Before merging pull requests
- After completing features
- Before version releases
- Security audit reviews
Enhanced review tools:
- Complexity analysis: `radon cc src/ -s`
- Security patterns: `semgrep --config=auto src/`
- Coverage delta: `diff-cover coverage.xml`
Use PROACTIVELY for performance tuning:
- Memory profiling and optimization
- Async performance tuning
- Cache optimization strategies
- WebSocket message batching
- DataFrame operation optimization
- Benchmark management and comparison
- Resource utilization analysis
Example scenarios:
- "Optimize tick processing latency"
- "Reduce memory usage in orderbook"
- "Improve DataFrame aggregation performance"
- "Profile async event loop bottlenecks"
Use for end-to-end testing with market simulation:
- Mock market data generation
- Order lifecycle simulation
- WebSocket stress testing
- Multi-timeframe backtesting
- Paper trading validation
- Market replay testing
- Cross-component integration testing
Example scenarios:
- "Test order execution under volatile conditions"
- "Validate indicator calculations with real data"
- "Stress test WebSocket with 1000+ ticks/second"
- "Simulate market gaps and disconnections"
Use PROACTIVELY for security and compliance:
- API key security validation
- WebSocket authentication audit
- Data encryption verification
- PII handling compliance
- Dependency vulnerability scanning
- Secret scanning in codebase
- Input validation checks
- Rate limiting verification
Example scenarios:
- Before releases
- After adding authentication features
- When handling sensitive data
- Regular security audits
Use for release preparation and deployment:
- Semantic versioning validation
- Breaking change detection
- Migration script generation
- Release notes compilation
- PyPI deployment automation
- Git tag management
- Pre-release testing coordination
- Rollback procedure planning
Example scenarios:
- Preparing version releases
- Creating migration guides
- Automating deployment pipeline
- Managing release branches
Use for market data analysis and validation:
- Indicator accuracy testing against TA-Lib
- Market microstructure analysis
- Order flow pattern detection
- Statistical validation of calculations
- Backtest result analysis
- Performance attribution
- Volume profile analysis
- Data quality verification
Example scenarios:
- "Validate MACD implementation"
- "Analyze order flow imbalances"
- "Compare indicator outputs with TA-Lib"
- "Statistical validation of backtest results"
1. data-analyst: Analyze requirements and validate approach
2. python-developer: Implement the feature
3. integration-tester: Create comprehensive tests
4. code-standards-enforcer: Ensure compliance
5. performance-optimizer: Optimize if needed
6. code-documenter: Create documentation
7. code-reviewer: Final review
8. release-manager: Prepare for release
1. code-debugger: Investigate and identify root cause
2. integration-tester: Reproduce with test case
3. python-developer: Implement fix
4. code-standards-enforcer: Verify fix quality
5. code-reviewer: Review the fix
1. performance-optimizer: Profile and identify bottlenecks
2. code-refactor: Plan optimization strategy
3. python-developer: Implement optimizations
4. integration-tester: Verify performance improvements
5. code-reviewer: Review changes
1. security-auditor: Comprehensive security scan
2. code-debugger: Investigate vulnerabilities
3. python-developer: Implement fixes
4. code-standards-enforcer: Verify secure coding
5. integration-tester: Test security measures
1. code-standards-enforcer: Pre-release compliance check
2. security-auditor: Security validation
3. integration-tester: Full regression testing
4. performance-optimizer: Performance regression check
5. code-documenter: Update documentation
6. release-manager: Coordinate release
- Use agents concurrently when multiple tasks can be parallelized
- Be specific in task descriptions for agents
- Choose the right agent based on the task type, not just keywords
- Use PROACTIVELY - don't wait for user to request specific agents
- Combine agents for complex tasks using collaboration patterns
- Leverage specialized agents for their unique capabilities
- Follow patterns for common workflows to ensure comprehensive coverage
```
# Concurrent agent execution for new feature
1. Launch simultaneously:
   - data-analyst: Validate market data requirements
   - performance-optimizer: Baseline current performance
   - security-auditor: Review security implications
2. python-developer: Implement based on analysis results
3. Launch simultaneously:
   - integration-tester: Create test suite
   - code-standards-enforcer: Check compliance
   - code-documenter: Write documentation
4. code-reviewer: Final review before merge

# Sequential debugging workflow
1. code-debugger: Analyze logs and identify issue
2. performance-optimizer: Check for performance degradation
3. python-developer: Implement fix
4. integration-tester: Verify fix with reproduction test
5. code-reviewer: Review and approve fix
```

Note: Tool permissions are configured at the system level. This section documents common commands agents need.
All Agents:
- `./test.sh [script]` - Run tests and examples with proper environment
- File operations (Read, Write, Edit, MultiEdit)
- `git status`, `git diff`, `git add` - Version control
python-developer:
- `uv run pytest` - Run test suite
- `uv add [package]` - Add dependencies
- `./test.sh examples/*.py` - Test example scripts
code-standards-enforcer:
- `mcp__ide__getDiagnostics` - CHECK FIRST - IDE diagnostics
- `uv run ruff check .` - Lint code
- `uv run ruff format .` - Format code
- `uv run mypy src/` - Type checking
- `uv run pytest --cov` - Coverage reports
code-debugger:
- `./test.sh` with debug scripts
- `grep` and search operations
- Log analysis commands
code-reviewer:
- `git diff` - Review changes
- `uv run pytest` - Verify tests pass
- Static analysis tools
```
# Agent workflow for implementing a feature
1. python-developer:
   - Edit src/project_x_py/new_feature.py
   - ./test.sh tests/test_new_feature.py
2. code-standards-enforcer:
   - mcp__ide__getDiagnostics  # ALWAYS CHECK FIRST
   - uv run ruff check src/
   - uv run mypy src/
   - Fix any issues found
3. code-reviewer:
   - mcp__ide__getDiagnostics  # Verify no issues remain
   - git diff
   - uv run pytest
   - Review implementation
```

CRITICAL: The code-standards-enforcer agent must ALWAYS:
- First check `mcp__ide__getDiagnostics` for the modified files
- Fix any IDE diagnostic errors/warnings before proceeding
- Then run traditional linting tools (ruff, mypy)
- Verify with IDE diagnostics again after fixes
This catches issues that mypy might miss, such as:
- Incorrect method names (e.g., `get_statistics` vs `get_position_stats`)
- Missing attributes on classes
- Type mismatches that IDE's type checker detects
- Real-time semantic errors
Note: MCP server access is system-configured. Agents should have access to relevant MCP servers for their tasks.
All Agents Should Access:
- `mcp__aakarsh-sasi-memory-bank-mcp` - Track progress and context
- `mcp__mcp-obsidian` - Document plans and decisions
- `mcp__smithery-ai-filesystem` - File operations
python-developer:
- `mcp__project-x-py_Docs` - Search project documentation
- `mcp__upstash-context-7-mcp` - Get library documentation
- `mcp__waldzellai-clear-thought` - Complex problem solving
- `mcp__itseasy-21-mcp-knowledge-graph` - Map component relationships
code-standards-enforcer:
- `mcp__project-x-py_Docs` - Verify against documentation
- `mcp__aakarsh-sasi-memory-bank-mcp` - Check architectural decisions
code-refactor:
- `mcp__waldzellai-clear-thought` - Plan refactoring strategy
- `mcp__itseasy-21-mcp-knowledge-graph` - Understand dependencies
- `mcp__aakarsh-sasi-memory-bank-mcp` - Log refactoring decisions
code-documenter:
- `mcp__mcp-obsidian` - Create documentation
- `mcp__project-x-py_Docs` - Reference existing docs
- `mcp__tavily-mcp` - Research external APIs
code-debugger:
- `mcp__waldzellai-clear-thought` - Analyze issues systematically
- `mcp__itseasy-21-mcp-knowledge-graph` - Trace data flow
- `mcp__ide` - Get diagnostics and errors
code-reviewer:
- `mcp__github` - Review PRs and issues
- `mcp__project-x-py_Docs` - Verify against standards
- `mcp__aakarsh-sasi-memory-bank-mcp` - Check design decisions
```python
# python-developer agent workflow
# 1. Search existing patterns:
await mcp__project_x_py_Docs__search_project_x_py_code(
    query="async def place_order"
)

# 2. Track implementation:
await mcp__aakarsh_sasi_memory_bank_mcp__track_progress(
    action="Implemented async order placement",
    description="Added bracket order support"
)

# 3. Document in Obsidian:
await mcp__mcp_obsidian__obsidian_append_content(
    filepath="Development/ProjectX SDK/Features/Order System.md",
    content="## Bracket Order Implementation\n..."
)
```
```python
# code-debugger agent workflow
# 1. Analyze problem:
await mcp__waldzellai_clear_thought__clear_thought(
    operation="debugging_approach",
    prompt="WebSocket disconnecting under load"
)

# 2. Check component relationships:
await mcp__itseasy_21_mcp_knowledge_graph__search_nodes(
    query="WebSocket RealtimeClient"
)

# 3. Get IDE diagnostics:
await mcp__ide__getDiagnostics()
```

- Memory Bank: Update after completing tasks
- Obsidian: Document multi-session plans and decisions
- Clear Thought: Use for complex analysis and planning
- Knowledge Graph: Maintain component relationships
- Project Docs: Reference before implementing
- GitHub: Check issues and PRs for context
ALWAYS use Obsidian MCP integration for:
- Multi-session development plans
- Testing procedures and results
- Architecture decisions and design documents
- Feature planning and roadmaps
- Bug investigation notes
- Performance optimization tracking
- Release planning and checklists
DO NOT create project files for:
- Personal development notes (use Obsidian instead)
- Temporary planning documents
- Testing logs and results
- Work-in-progress documentation
- Meeting notes or discussions
When using Obsidian for this project, use the following structure:
```
Development/
  ProjectX SDK/
    Feature Planning/
      [Feature Name].md
    Testing Plans/
      [Version] Release Testing.md
    Architecture Decisions/
      [Decision Topic].md
    Bug Investigations/
      [Issue Number] - [Description].md
    Performance/
      [Optimization Area].md
```
```python
# When creating multi-session plans:
await mcp__mcp_obsidian__obsidian_append_content(
    filepath="Development/ProjectX SDK/Feature Planning/WebSocket Improvements.md",
    content="# WebSocket Connection Improvements Plan\n..."
)

# When documenting test results:
await mcp__mcp_obsidian__obsidian_append_content(
    filepath="Development/ProjectX SDK/Testing Plans/v3.3.0 Release Testing.md",
    content="## Test Results\n..."
)
```

This keeps the project repository clean and focused on production code while maintaining comprehensive development documentation in Obsidian.
```bash
uv add [package]        # Add a dependency
uv add --dev [package]  # Add a development dependency
uv sync                 # Install/sync dependencies
uv run [command]        # Run command in virtual environment
```

```bash
uv run pytest                                       # Run all tests
uv run pytest tests/test_client.py                  # Run specific test file
uv run pytest -m "not slow"                         # Run tests excluding slow ones
uv run pytest --cov=project_x_py --cov-report=html  # Generate coverage report
uv run pytest -k "async"                            # Run only async tests
```

```python
# Test async methods with pytest-asyncio
import pytest

@pytest.mark.asyncio
async def test_async_method():
    async with ProjectX.from_env() as client:
        await client.authenticate()
        result = await client.get_bars("MNQ", days=1)
        assert result is not None
```

```bash
uv run ruff check .       # Lint code
uv run ruff check . --fix # Auto-fix linting issues
uv run ruff format .      # Format code
uv run mypy src/          # Type checking
```

```bash
uv build                 # Build wheel and source distribution
uv run python -m build   # Alternative build command
```

ProjectX Client (`src/project_x_py/client/`)
- Main async API client for TopStepX ProjectX Gateway
- Modular architecture with specialized modules:
- `auth.py`: Authentication and JWT token management
- `http.py`: Async HTTP client with retry logic
- `cache.py`: Intelligent caching for instruments
- `market_data.py`: Market data operations
- `trading.py`: Trading operations
- `rate_limiter.py`: Async rate limiting
- `base.py`: Base class combining all mixins
Specialized Managers (All Async)
- `OrderManager` (`order_manager/`): Comprehensive async order operations
  - `core.py`: Main order operations
  - `bracket_orders.py`: OCO and bracket order logic
  - `position_orders.py`: Position-based order management
  - `tracking.py`: Order state tracking
  - `templates.py`: Order templates for common strategies
- `PositionManager` (`position_manager/`): Async position tracking and risk management
  - `core.py`: Position management core
  - `risk.py`: Risk calculations and limits
  - `analytics.py`: Performance analytics
  - `monitoring.py`: Real-time position monitoring
  - `tracking.py`: Position lifecycle tracking
- `RiskManager` (`risk_manager/`): Integrated risk management
  - `core.py`: Risk limits and validation
  - `monitoring.py`: Real-time risk monitoring
  - `analytics.py`: Risk metrics and reporting
- `ProjectXRealtimeDataManager` (`realtime_data_manager/`): Async WebSocket data
  - `core.py`: Main data manager
  - `callbacks.py`: Event callback handling
  - `data_processing.py`: OHLCV bar construction
  - `memory_management.py`: Efficient data storage
- `OrderBook` (`orderbook/`): Async Level 2 market depth
  - `base.py`: Core orderbook functionality
  - `analytics.py`: Market microstructure analysis
  - `detection.py`: Iceberg and spoofing detection
  - `profile.py`: Volume profile analysis
Technical Indicators (src/project_x_py/indicators/)
- TA-Lib compatible indicator library built on Polars
- 58+ indicators including pattern recognition:
- Momentum: RSI, MACD, Stochastic, etc.
- Overlap: SMA, EMA, Bollinger Bands, etc.
- Volatility: ATR, Keltner Channels, etc.
- Volume: OBV, VWAP, Money Flow, etc.
- Pattern Recognition (NEW):
- Fair Value Gap (FVG): Price imbalance detection
- Order Block: Institutional order zone identification
- Waddah Attar Explosion: Volatility-based trend strength
- All indicators work with Polars DataFrames for performance
Configuration System
- Environment variable based configuration
- JSON config file support (`~/.config/projectx/config.json`)
- ProjectXConfig dataclass for type safety
- ConfigManager for centralized configuration handling
Event System
- Unified EventBus for cross-component communication
- Type-safe event definitions
- Async event handlers with priority support
- Built-in event types for all trading events
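The priority-ordered async handler idea can be sketched minimally. This is an illustrative toy, not the SDK's actual EventBus API (method names and the lower-number-runs-first convention are assumptions):

```python
import asyncio
from collections import defaultdict

class EventBus:
    """Minimal sketch of an async event bus with priority-ordered handlers."""

    def __init__(self) -> None:
        self._handlers = defaultdict(list)  # event name -> [(priority, handler)]

    def on(self, event: str, handler, priority: int = 0) -> None:
        # Lower priority number runs first (an assumption for this sketch)
        self._handlers[event].append((priority, handler))
        self._handlers[event].sort(key=lambda pair: pair[0])

    async def emit(self, event: str, payload=None) -> None:
        # Await each handler in priority order
        for _, handler in self._handlers[event]:
            await handler(payload)

async def main() -> list:
    bus = EventBus()
    seen = []

    async def audit(payload):
        seen.append(f"audit:{payload}")

    async def notify(payload):
        seen.append(f"notify:{payload}")

    bus.on("order_filled", notify, priority=1)
    bus.on("order_filled", audit, priority=0)  # runs first despite later registration
    await bus.emit("order_filled", "MNQ")
    return seen

result = asyncio.run(main())
```

The decoupling benefit is that OrderManager can emit `order_filled` without knowing which components listen.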
The Features enum defines optional components that can be enabled:
- `ORDERBOOK = "orderbook"` - Level 2 market depth and analysis
- `RISK_MANAGER = "risk_manager"` - Position sizing and risk management
- `TRADE_JOURNAL = "trade_journal"` - Trade logging (future)
- `PERFORMANCE_ANALYTICS = "performance_analytics"` - Advanced metrics (future)
- `AUTO_RECONNECT = "auto_reconnect"` - Automatic reconnection (future)
Note: OrderManager and PositionManager are always included by default.
Async Factory Functions: Use async `create_*` functions for component initialization:

```python
# TradingSuite - Recommended approach (v3.0.0+)
async def setup_trading():
    # Simple one-line setup with TradingSuite
    suite = await TradingSuite.create(
        "MNQ",
        timeframes=["1min", "5min"],
        features=["orderbook"]
    )
    # Everything is ready - client authenticated, realtime connected
    return suite
```

Dependency Injection: Managers receive their dependencies (ProjectX client, realtime client) rather than creating them.
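Why injection matters can be sketched in a few lines; the class and method names below are invented for illustration (the real managers take a ProjectX client the same way, but with a richer interface):

```python
import asyncio

class FakeClient:
    """Stand-in for an authenticated ProjectX client (hypothetical)."""
    async def get_account_id(self) -> int:
        return 12345

class OrderManagerSketch:
    """Receives its client instead of constructing one (dependency injection)."""
    def __init__(self, client) -> None:
        self.client = client  # injected, so tests can pass a mock instead

    async def account(self) -> int:
        return await self.client.get_account_id()

manager = OrderManagerSketch(FakeClient())
account_id = asyncio.run(manager.account())
```

Because the dependency arrives from outside, unit tests can inject mocks and a single real client can be shared across all managers.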
Real-time Integration: Single ProjectXRealtimeClient instance shared across managers for WebSocket connection efficiency.
Context Managers: Always use async context managers for proper resource cleanup:

```python
async with ProjectX.from_env() as client:
    # Client automatically handles auth, cleanup
    pass
```

- Authentication: ProjectX client authenticates and provides JWT tokens
- Real-time Setup: Create ProjectXRealtimeClient with JWT for WebSocket connections
- Manager Initialization: Pass clients to specialized managers via dependency injection
- Data Processing: Polars DataFrames used throughout for performance
- Event Handling: Real-time updates flow through WebSocket to respective managers
- All indicators follow TA-Lib naming conventions (uppercase function names are allowed in `indicators/__init__.py`)
- Use the Polars `pipe()` method for chaining: `data.pipe(SMA, period=20).pipe(RSI, period=14)`
- Indicators support both class instantiation and direct function calls
- All price handling uses Decimal for precision
- Automatic tick size alignment in OrderManager
- Price formatting utilities in utils.py
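Decimal-based tick alignment can be sketched as below; the function name and half-up rounding choice are assumptions for illustration (the SDK's own alignment lives in OrderManager/utils):

```python
from decimal import Decimal, ROUND_HALF_UP

def align_to_tick(price: Decimal, tick_size: Decimal) -> Decimal:
    """Snap a price to the nearest valid tick using Decimal to avoid float drift."""
    ticks = (price / tick_size).quantize(Decimal("1"), rounding=ROUND_HALF_UP)
    return ticks * tick_size

# MNQ trades in 0.25-point increments; 18001.13 is not a valid tick
aligned = align_to_tick(Decimal("18001.13"), Decimal("0.25"))
```

With floats, `18001.13 / 0.25` can carry representation error into the rounding step; Decimal keeps the arithmetic exact.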
- Custom exception hierarchy in exceptions.py
- All API errors wrapped in ProjectX-specific exceptions
- Comprehensive error context and retry logic
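The wrapping pattern can be sketched as follows; the exception names mirror the ProjectX prefix, but this exact hierarchy and the `raise_for_status` helper are illustrative, not the SDK's real `exceptions.py`:

```python
class ProjectXError(Exception):
    """Base of a sketch exception hierarchy (illustrative names)."""
    def __init__(self, message, *, status_code=None):
        super().__init__(message)
        self.status_code = status_code  # extra context for retry decisions

class ProjectXAuthError(ProjectXError):
    """Raised when authentication with the gateway fails."""

def raise_for_status(status: int) -> None:
    # Wrap raw HTTP failures in SDK-specific exceptions with context
    if status == 401:
        raise ProjectXAuthError("Authentication failed", status_code=401)
    if status >= 400:
        raise ProjectXError(f"API error {status}", status_code=status)

try:
    raise_for_status(401)
except ProjectXError as exc:  # catching the base class catches all SDK errors
    caught = (type(exc).__name__, exc.status_code)
```

Callers can catch the specific subclass for targeted handling or the base class as a catch-all.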
- Pytest with async support and mocking
- Test markers: unit, integration, slow, realtime
- High test coverage required (configured in pyproject.toml)
- Mock external API calls in unit tests
Required environment variables:
- `PROJECT_X_API_KEY`: TopStepX API key
- `PROJECT_X_USERNAME`: TopStepX username
Optional configuration:
- `PROJECTX_API_URL`: Custom API endpoint
- `PROJECTX_TIMEOUT_SECONDS`: Request timeout
- `PROJECTX_RETRY_ATTEMPTS`: Retry attempts
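How such variables might be consumed can be sketched with a frozen dataclass; `ConfigSketch` and its 30-second default are hypothetical (the SDK's real type is ProjectXConfig, loaded via ConfigManager):

```python
import os
from dataclasses import dataclass

@dataclass(frozen=True)
class ConfigSketch:
    """Illustrative env-driven config; the SDK's ProjectXConfig is richer."""
    api_key: str
    username: str
    timeout_seconds: float = 30.0  # default value is an assumption

    @classmethod
    def from_env(cls) -> "ConfigSketch":
        try:
            api_key = os.environ["PROJECT_X_API_KEY"]      # required
            username = os.environ["PROJECT_X_USERNAME"]    # required
        except KeyError as exc:
            raise RuntimeError(f"Missing required env var: {exc}") from exc
        # Optional variables fall back to defaults
        timeout = float(os.environ.get("PROJECTX_TIMEOUT_SECONDS", "30"))
        return cls(api_key=api_key, username=username, timeout_seconds=timeout)
```

Missing required variables fail fast with a clear message instead of surfacing later as an authentication error.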
Several MCP (Model Context Protocol) servers are available to enhance development workflow:
Tracks development progress and maintains context across sessions:
```python
# Track feature implementation progress
await mcp__aakarsh_sasi_memory_bank_mcp__track_progress(
    action="Implemented bracket order system",
    description="Added OCO and bracket order support with automatic stop/target placement"
)

# Log architectural decisions
await mcp__aakarsh_sasi_memory_bank_mcp__log_decision(
    title="Event System Architecture",
    context="Need unified event handling across components",
    decision="Implement EventBus with async handlers and priority support",
    alternatives=["Direct callbacks", "Observer pattern", "Pub/sub with Redis"],
    consequences=["Better decoupling", "Easier testing", "Slight performance overhead"]
)

# Switch development modes
await mcp__aakarsh_sasi_memory_bank_mcp__switch_mode("debug")  # architect, code, debug, test
```

Maps component relationships and data flow:
```python
# Map trading system relationships
await mcp__itseasy_21_mcp_knowledge_graph__create_entities(
    entities=[
        {"name": "TradingSuite", "entityType": "Core",
         "observations": ["Central orchestrator", "Manages all components"]},
        {"name": "OrderManager", "entityType": "Manager",
         "observations": ["Handles order lifecycle", "Supports bracket orders"]}
    ]
)

await mcp__itseasy_21_mcp_knowledge_graph__create_relations(
    relations=[
        {"from": "TradingSuite", "to": "OrderManager", "relationType": "manages"},
        {"from": "OrderManager", "to": "ProjectXClient", "relationType": "uses"}
    ]
)
```

For complex problem-solving and architecture decisions:
```python
# Analyze performance bottlenecks
await mcp__waldzellai_clear_thought__clear_thought(
    operation="debugging_approach",
    prompt="WebSocket connection dropping under high message volume",
    context="Real-time data manager processing 1000+ ticks/second"
)

# Plan refactoring strategy
await mcp__waldzellai_clear_thought__clear_thought(
    operation="systems_thinking",
    prompt="Refactor monolithic client into modular mixins",
    context="Need better separation of concerns without breaking existing API"
)
```

Quick access to project-specific documentation:
```python
# Search project documentation
await mcp__project_x_py_Docs__search_project_x_py_documentation(
    query="bracket order implementation"
)

# Search codebase
await mcp__project_x_py_Docs__search_project_x_py_code(
    query="async def place_bracket_order"
)
```

Research trading APIs and async patterns:
```python
# Search for solutions
await mcp__tavily_mcp__tavily_search(
    query="python asyncio websocket reconnection pattern futures trading",
    max_results=5,
    search_depth="advanced"
)

# Extract documentation
await mcp__tavily_mcp__tavily_extract(
    urls=["https://docs.python.org/3/library/asyncio-task.html"],
    format="markdown"
)
```

- Memory Bank: Update after completing significant features or making architectural decisions
- Knowledge Graph: Maintain when adding new components or changing relationships
- Clear Thought: Use for complex debugging, performance analysis, or architecture planning
- Documentation MCPs: Reference before implementing new features to understand existing patterns
- Starting a new feature: Check Memory Bank for context, use Clear Thought for planning
- Debugging complex issues: Clear Thought for analysis, Knowledge Graph for understanding relationships
- Making architectural decisions: Log with Memory Bank, analyze with Clear Thought
- Understanding existing code: Project Docs for internal code, Tavily for external research
- Tracking progress: Memory Bank for TODO tracking and progress updates
- HTTP connection pooling with retry strategies cuts connection overhead by 50-70%
- Instrument caching reduces repeated API calls by 80%
- Preemptive JWT token refresh at 80% lifetime prevents authentication delays
- Session-based requests with automatic retry on failures
- OrderBook: Sliding windows with configurable limits (max 10K trades, 1K depth entries)
- RealtimeDataManager: Automatic cleanup maintains 1K bars per timeframe
- Indicators: LRU cache for repeated calculations (100 entry limit)
- Periodic garbage collection after large data operations
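The bounded indicator cache maps directly onto `functools.lru_cache`; this standalone sketch (not SDK code, with a stand-in computation) shows the documented 100-entry limit in action:

```python
from functools import lru_cache

call_count = 0  # counts actual computations, not cache hits

@lru_cache(maxsize=100)  # mirrors the documented 100-entry indicator cache
def expensive_indicator(period: int) -> float:
    """Stand-in for a costly indicator computation (illustrative)."""
    global call_count
    call_count += 1
    return period * 2.0

expensive_indicator(14)
expensive_indicator(14)  # same arguments: served from cache, no recomputation
expensive_indicator(20)  # new arguments: computed and cached
```

Once 100 distinct argument sets are cached, the least recently used entry is evicted, keeping memory bounded.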
- Chained operations reduce intermediate DataFrame creation by 30-40%
- Lazy evaluation with Polars for better memory efficiency
- Efficient datetime parsing with cached timezone objects
- Vectorized operations in orderbook analysis
Use async built-in methods to monitor performance:

```python
# Client performance stats (async)
async with ProjectX.from_env() as client:
    await client.authenticate()

    # Check performance metrics
    stats = await client.get_performance_stats()
    print(f"API calls: {stats['api_calls']}")
    print(f"Cache hits: {stats['cache_hits']}")

    # Health check
    health = await client.get_health_status()

    # Memory usage monitoring
    orderbook_stats = await orderbook.get_memory_stats()
    data_manager_stats = await data_manager.get_memory_stats()
```

- 50-70% reduction in API calls through intelligent caching
- 30-40% faster indicator calculations via chained operations
- 60% less memory usage through sliding windows and cleanup
- Sub-second response times for cached operations
- 95% reduction in polling with real-time WebSocket feeds
- `max_trades = 10000` (OrderBook trade history)
- `max_depth_entries = 1000` (OrderBook depth per side)
- `max_bars_per_timeframe = 1000` (Real-time data per timeframe)
- `tick_buffer_size = 1000` (Tick data buffer)
- `cache_max_size = 100` (Indicator cache entries)
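These bounds behave like fixed-size ring buffers: new entries push out the oldest ones automatically. A minimal sketch with `collections.deque` (illustrative, not the SDK's internal storage):

```python
from collections import deque

# A bounded buffer drops the oldest entries automatically, capping memory
max_bars_per_timeframe = 1000
bars = deque(maxlen=max_bars_per_timeframe)

for i in range(1500):  # simulate 1500 incoming bars
    bars.append({"close": i})

# Only the most recent 1000 bars survive: 500..1499
oldest = bars[0]["close"]
newest = bars[-1]["close"]
```

Appends and evictions are O(1), so the buffer never needs an explicit cleanup pass on the hot path.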
- Breaking: Complete statistics system redesign with 100% async-first architecture
- Added: New statistics module with BaseStatisticsTracker, ComponentCollector, StatisticsAggregator
- Added: Multi-format export (JSON, Prometheus, CSV, Datadog) with data sanitization
- Added: Enhanced health monitoring with 0-100 scoring and configurable thresholds
- Added: TTL caching, parallel collection, and circular buffers for performance optimization
- Added: 45+ new tests covering all aspects of the async statistics system
- Fixed: Eliminated all statistics-related deadlocks with single RW lock per component
- Changed: All statistics methods now require `await` for consistency and performance
- Removed: Legacy statistics mixins (EnhancedStatsTrackingMixin, StatsTrackingMixin)