Resolves #100: pdd Setup should use llm_invoke and give access to all models #123
Open · qanagattandyr wants to merge 3 commits into promptdriven:main from qanagattandyr:fix/use-llm-invoke
Changes from all commits
```diff
@@ -8,11 +8,11 @@
 import sys
 import subprocess
 import json
-import requests
 import csv
 import importlib.resources
 from pathlib import Path
 from typing import Dict, Optional, Tuple, List
+from pdd.llm_invoke import llm_invoke

 # Global variables for non-ASCII characters and colors
 HEAVY_HORIZONTAL = "━"

@@ -101,114 +101,81 @@ def print_pdd_logo():
     print()
     print_colored("Let's get set up quickly with a solid basic configuration!", WHITE, bold=True)
     print()
-    print_colored("Supported: OpenAI, Google Gemini, and Anthropic Claude", WHITE)
-    print_colored("from their respective API endpoints (no third-parties, such as Azure)", WHITE)
+    print_colored("Supports all major LLM providers including:", WHITE)
+    print_colored("OpenAI, Google Gemini, Anthropic Claude, Fireworks, Groq, Vertex AI, and more", WHITE)
     print()

 def get_csv_variable_names() -> Dict[str, str]:
-    """Inspect packaged CSV to determine API key variable names per provider.
+    """Inspect packaged CSV to determine all unique API key variable names.

-    Focus on direct providers only: OpenAI GPT models (model startswith 'gpt-'),
-    Google Gemini (model startswith 'gemini/'), and Anthropic (model startswith 'anthropic/').
+    Returns a dictionary mapping each unique API key variable name to itself.
+    This allows discovery of all providers configured in the CSV.
     """
     header, rows = _read_packaged_llm_model_csv()
     variable_names: Dict[str, str] = {}

     for row in rows:
-        model = (row.get('model') or '').strip()
         api_key = (row.get('api_key') or '').strip()
-        provider = (row.get('provider') or '').strip().upper()
+        if api_key and api_key not in variable_names:
+            # Map the API key name to itself
+            variable_names[api_key] = api_key

-        if not api_key:
-            continue
-
-        if model.startswith('gpt-') and provider == 'OPENAI':
-            variable_names['OPENAI'] = api_key
-        elif model.startswith('gemini/') and provider == 'GOOGLE':
-            # Prefer direct Gemini key, not Vertex
-            variable_names['GOOGLE'] = api_key
-        elif model.startswith('anthropic/') and provider == 'ANTHROPIC':
-            variable_names['ANTHROPIC'] = api_key
-
-    # Fallbacks if not detected (keep prior behavior)
-    variable_names.setdefault('OPENAI', 'OPENAI_API_KEY')
-    # Prefer GEMINI_API_KEY name for Google if present
-    variable_names.setdefault('GOOGLE', 'GEMINI_API_KEY')
-    variable_names.setdefault('ANTHROPIC', 'ANTHROPIC_API_KEY')
     return variable_names

 def discover_api_keys() -> Dict[str, Optional[str]]:
     """Discover API keys from environment variables"""
-    # Get the variable names actually used in CSV template
+    # Get all the variable names actually used in CSV template
     csv_vars = get_csv_variable_names()

-    keys = {
-        'OPENAI_API_KEY': os.getenv('OPENAI_API_KEY'),
-        'ANTHROPIC_API_KEY': os.getenv('ANTHROPIC_API_KEY'),
-    }
-
-    # For Google, check both possible environment variables but use CSV template's variable name
-    google_var_name = csv_vars.get('GOOGLE', 'GEMINI_API_KEY')  # Default to GEMINI_API_KEY
-    google_api_key = os.getenv('GEMINI_API_KEY') or os.getenv('GOOGLE_API_KEY')
-    keys[google_var_name] = google_api_key
+    keys = {}
+    for api_key_name in csv_vars.values():
+        keys[api_key_name] = os.getenv(api_key_name)

     return keys
```
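The new CSV-driven discovery logic can be exercised in isolation. The sketch below is an assumption-laden illustration, not the PR's code verbatim: it inlines a hypothetical CSV payload in place of `_read_packaged_llm_model_csv`, and the model names and key variables shown are illustrative, not the packaged CSV's actual contents.

```python
import csv
import io
import os
from typing import Dict, Optional

# Hypothetical stand-in for the packaged llm_model.csv contents
CSV_TEXT = """provider,model,api_key
OpenAI,gpt-4o,OPENAI_API_KEY
Google,gemini/gemini-1.5-pro,GEMINI_API_KEY
Anthropic,anthropic/claude-3-5-sonnet,ANTHROPIC_API_KEY
Fireworks,fireworks_ai/llama-v3,FIREWORKS_API_KEY
OpenAI,gpt-4o-mini,OPENAI_API_KEY
"""

def get_csv_variable_names() -> Dict[str, str]:
    """Collect every unique api_key variable name, mapping each to itself."""
    reader = csv.DictReader(io.StringIO(CSV_TEXT))
    variable_names: Dict[str, str] = {}
    for row in reader:
        api_key = (row.get("api_key") or "").strip()
        if api_key and api_key not in variable_names:
            variable_names[api_key] = api_key
    return variable_names

def discover_api_keys() -> Dict[str, Optional[str]]:
    """Look up each discovered variable name in the process environment."""
    return {name: os.getenv(name) for name in get_csv_variable_names().values()}

# Duplicate rows collapse: OPENAI_API_KEY appears once despite two OpenAI models
print(sorted(get_csv_variable_names()))
# → ['ANTHROPIC_API_KEY', 'FIREWORKS_API_KEY', 'GEMINI_API_KEY', 'OPENAI_API_KEY']
```

This is the design point of the change: instead of hard-coding three providers, any provider row added to the CSV automatically becomes discoverable.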
```diff
-def test_openai_key(api_key: str) -> bool:
-    """Test OpenAI API key validity"""
-    if not api_key or not api_key.strip():
-        return False
-
-    try:
-        headers = {
-            'Authorization': f'Bearer {api_key.strip()}',
-            'Content-Type': 'application/json'
-        }
-        response = requests.get(
-            'https://api.openai.com/v1/models',
-            headers=headers,
-            timeout=10
-        )
-        return response.status_code == 200
-    except Exception:
-        return False
-
-def test_google_key(api_key: str) -> bool:
-    """Test Google Gemini API key validity"""
-    if not api_key or not api_key.strip():
-        return False
-
-    try:
-        response = requests.get(
-            f'https://generativelanguage.googleapis.com/v1beta/models?key={api_key.strip()}',
-            timeout=10
-        )
-        return response.status_code == 200
-    except Exception:
-        return False
-
-def test_anthropic_key(api_key: str) -> bool:
-    """Test Anthropic API key validity"""
-    if not api_key or not api_key.strip():
-        return False
-
-    try:
-        headers = {
-            'x-api-key': api_key.strip(),
-            'Content-Type': 'application/json'
-        }
-        response = requests.get(
-            'https://api.anthropic.com/v1/messages',
-            headers=headers,
-            timeout=10
-        )
-        # Anthropic returns 400 for invalid request structure but 401/403 for bad keys
-        return response.status_code != 401 and response.status_code != 403
+def test_api_key_with_llm_invoke(api_key_name: str, api_key_value: str) -> bool:
+    """Test an API key by attempting to invoke llm with a simple prompt.
+
+    Args:
+        api_key_name: The environment variable name for the API key (e.g., 'OPENAI_API_KEY')
+        api_key_value: The actual API key value to test
+
+    Returns:
+        True if the API key works, False otherwise
+    """
+    if not api_key_value or not api_key_value.strip():
+        return False
+
+    # Temporarily set the API key in the environment
+    old_value = os.environ.get(api_key_name)
+    try:
+        os.environ[api_key_name] = api_key_value.strip()
+
+        # Try to invoke llm with a simple prompt
+        # Use a very simple prompt and low cost settings
+        response = llm_invoke(
+            prompt="Say hello",
+            input_json={},
+            strength=0.0,  # Use cheapest model
+            temperature=0.1,
+            verbose=False
+        )
+
+        # If we get here without exception and have a result, the key works
+        return response is not None and 'result' in response
```
Suggested change:

```diff
-        return response is not None and 'result' in response
+        return 'result' in response
```
The comment 'Use cheapest model' is misleading. The `strength` parameter represents model capability/power, not cost. A value of 0.0 selects the weakest/least capable model, which typically costs less but may not always be the absolute cheapest option. Consider clarifying:

    strength=0.0,  # Use least capable model (typically cheapest)