The climate_llm_benchmark.py script now supports enhanced command-line arguments while maintaining full backward compatibility.
```bash
# Default configuration
python climate_llm_benchmark.py

# Custom configuration file
python climate_llm_benchmark.py --config=my_config.yaml

# Process specific chunks with new arguments
python climate_llm_benchmark.py --chunk=1
python climate_llm_benchmark.py --chunk=15
python climate_llm_benchmark.py --config=my_config.yaml --chunk=5

# Override Ollama base URL
python climate_llm_benchmark.py --base_url=http://localhost:11434

# Override OpenAI base URL (for local OpenAI-compatible servers)
python climate_llm_benchmark.py --base_url=http://localhost:8080

# Combined: custom config, chunk processing, and base URL override
python climate_llm_benchmark.py --config=ollama_config.yaml --chunk=3 --base_url=http://192.168.1.100:11434

# Process chunk 5 with custom Ollama server
python climate_llm_benchmark.py --chunk=5 --base_url=http://remote-server:11434

# Use different config and chunk with OpenAI-compatible server
python climate_llm_benchmark.py --config=openai_config.yaml --chunk=10 --base_url=http://localhost:8080
```

All existing usage patterns continue to work:
```bash
# Legacy positional arguments still work
python climate_llm_benchmark.py config.yaml
python climate_llm_benchmark.py 5              # chunk 5 with default config
python climate_llm_benchmark.py config.yaml 5  # chunk 5 with custom config
```

```bash
python climate_llm_benchmark.py --help
```

Output:
```text
usage: climate_llm_benchmark.py [-h] [--config CONFIG] [--chunk CHUNK]
                                [--base_url BASE_URL]
                                [legacy_args ...]

Climate LLM Benchmark

positional arguments:
  legacy_args          Legacy: [config_file] [chunk_number]

options:
  -h, --help           show this help message and exit
  --config CONFIG      Configuration file path (default: config.yaml)
  --chunk CHUNK        Chunk number to process (enables chunk mode)
  --base_url BASE_URL  Override base URL for model provider (e.g.,
                       http://localhost:11434)
```
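A minimal sketch of how a parser producing this help text might be wired up with argparse, including the legacy positional fallback. This is illustrative only; the function name and the disambiguation rule (a bare integer is treated as a chunk number, anything else as a config file) are assumptions, not necessarily the script's actual implementation.

```python
import argparse


def parse_args(argv=None):
    """Parse new-style flags while still accepting legacy positional args."""
    parser = argparse.ArgumentParser(description="Climate LLM Benchmark")
    parser.add_argument("--config", default="config.yaml",
                        help="Configuration file path (default: config.yaml)")
    parser.add_argument("--chunk", type=int,
                        help="Chunk number to process (enables chunk mode)")
    parser.add_argument("--base_url",
                        help="Override base URL for model provider")
    parser.add_argument("legacy_args", nargs="*",
                        help="Legacy: [config_file] [chunk_number]")
    args = parser.parse_args(argv)

    # Map legacy positionals onto the new flags: a bare integer is a
    # chunk number, anything else is a config file path. Explicit flags
    # take precedence over legacy positionals.
    for value in args.legacy_args:
        if value.isdigit():
            if args.chunk is None:
                args.chunk = int(value)
        elif args.config == "config.yaml":
            args.config = value
    return args
```

With this shape, `parse_args(["config.yaml", "5"])` and `parse_args(["--config=config.yaml", "--chunk=5"])` resolve to the same settings, which is what makes the legacy format a pure alias for the new one.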
The --base_url argument overrides the configuration for these providers:
Ollama:

```yaml
# config.yaml - will be overridden by --base_url
providers:
  ollama:
    base_url: "http://localhost:11434"  # ← overridden
```

OpenAI (for compatible servers):
```yaml
# config.yaml - will be overridden by --base_url
providers:
  openai:
    base_url: null  # ← overridden
```

Anthropic:
```yaml
# config.yaml - will be overridden by --base_url
providers:
  anthropic:
    base_url: null  # ← overridden
```

When --chunk is specified, chunk mode is automatically enabled regardless of config:
```yaml
# config.yaml
benchmark:
  chunk_mode: false  # ← ignored when --chunk is used
```

```bash
# Split work across multiple Ollama servers
python climate_llm_benchmark.py --chunk=1 --base_url=http://server1:11434 &
python climate_llm_benchmark.py --chunk=2 --base_url=http://server2:11434 &
python climate_llm_benchmark.py --chunk=3 --base_url=http://server3:11434 &
wait

# Use with local OpenAI-compatible servers (e.g., vLLM, Text Generation WebUI)
python climate_llm_benchmark.py --config=local_openai.yaml --chunk=1 --base_url=http://localhost:8080

# Test different models/configs on the same data chunks
python climate_llm_benchmark.py --config=gpt4_config.yaml --chunk=1 &
python climate_llm_benchmark.py --config=claude_config.yaml --chunk=1 &
python climate_llm_benchmark.py --config=ollama_config.yaml --chunk=1 --base_url=http://localhost:11434 &
wait
```

A batch script can process all chunks in parallel:

```bash
#!/bin/bash
# Process all chunks with custom Ollama server
for i in {1..20}; do
    echo "Starting chunk $i..."
    python climate_llm_benchmark.py --chunk=$i --base_url=http://192.168.1.100:11434 &
done
wait
echo "All chunks completed!"
```

When using overrides, the script shows the applied settings:
```text
Climate LLM Benchmark
Configuration: config.yaml
Base URL override: http://localhost:8080   ← shows the override
Chunk mode: Enabled
Processing chunk: 5
Mesh file: meshes/chunks/mesh_data_1.0deg_chunk_05_of_20.json
Provider: openai
Model: gpt-5-nano
...
```
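The precedence shown above (CLI flags beat config.yaml) might be applied along these lines. The helper name `apply_cli_overrides` and the config-dict layout are illustrative assumptions based on the config fragments shown earlier, not the script's actual code.

```python
def apply_cli_overrides(config, base_url=None, chunk=None):
    """Overlay CLI options onto the loaded config dict (CLI wins).

    A base_url override is pushed into every known provider section;
    a chunk number forcibly enables chunk mode, regardless of what
    config.yaml says.
    """
    if base_url is not None:
        providers = config.setdefault("providers", {})
        for name in ("ollama", "openai", "anthropic"):
            providers.setdefault(name, {})["base_url"] = base_url
    if chunk is not None:
        benchmark = config.setdefault("benchmark", {})
        benchmark["chunk_mode"] = True  # --chunk implies chunk mode
        benchmark["chunk"] = chunk
    return config
```

Applying the override last, after the YAML is loaded, is what guarantees that a `base_url: null` or `chunk_mode: false` in the file can never win over an explicit flag.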
Old:

```bash
python climate_llm_benchmark.py config.yaml 5
```

New (equivalent):

```bash
python climate_llm_benchmark.py --config=config.yaml --chunk=5
```

Benefits of New Format:
- More explicit and self-documenting
- Support for base URL overrides
- Better help documentation
- Easier to extend with additional options
For new scripts and workflows, use the argument-based format:

```bash
python climate_llm_benchmark.py --chunk=5 --base_url=http://localhost:11434
```

For existing scripts, no changes are needed; the legacy format continues to work.
Enhanced error messages show both new and legacy usage:

```text
Error: Chunk mode is enabled but no chunk number specified.
Usage:  python climate_llm_benchmark.py --chunk=N
    or: python climate_llm_benchmark.py --config=config.yaml --chunk=N
Legacy: python climate_llm_benchmark.py [config_file] chunk_number
```
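A sketch of the validation that could emit such a message. The function name and exact wiring are hypothetical; the point is simply that the check runs after all CLI overrides are applied, so a --chunk flag always satisfies it.

```python
import sys

USAGE = """Error: Chunk mode is enabled but no chunk number specified.
Usage:  python climate_llm_benchmark.py --chunk=N
    or: python climate_llm_benchmark.py --config=config.yaml --chunk=N
Legacy: python climate_llm_benchmark.py [config_file] chunk_number"""


def require_chunk(chunk_mode, chunk):
    """Exit with the combined new/legacy usage text when chunk mode
    is enabled but no chunk number was supplied."""
    if chunk_mode and chunk is None:
        print(USAGE, file=sys.stderr)
        sys.exit(1)
```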
This enhanced CLI provides better usability while maintaining complete backward compatibility for existing workflows.