This is a comprehensive Bash script for interacting with Ollama models, incorporating a "toolshed" concept for extending functionality. It's well-structured and demonstrates good effort in creating a rich command-line interface.
Here's a detailed code review, breaking down strengths, areas for improvement, and potential issues:
Overall Impressions
Strengths:
Ambitious and Feature-Rich: The script attempts to provide a full-fledged chat experience with Ollama, including tool integration, configuration management, and various helpful commands.
Modular Design: The use of functions for different functionalities (e.g., getHelp, addMessage, sendRequest, processUserCommand) makes the code more organized and readable.
Clear Command Structure: The / prefix for internal commands and ! for shell commands is a good design choice for usability.
Tool Integration: The "toolshed" concept is a powerful addition, allowing the AI to use external scripts, which significantly extends its capabilities.
Error Handling: Basic error checking is present, especially for API calls and missing requirements.
Informative Prompts: The system prompts are well-crafted to guide the AI's behavior and inform it about its environment and available tools/commands.
User Feedback: Provides good feedback to the user during operations (e.g., "Waiting for model response...", tool call/result messages).
Areas for Improvement:
Robustness and Error Handling: While some error handling exists, critical operations (like JSON parsing, file operations, curl calls) could benefit from more rigorous error checking and informative error messages.
Security: Executing arbitrary shell commands via ! and directly running run.sh scripts from TOOLS_DIRECTORY poses significant security risks if the script or its tools are used in an untrusted environment or with untrusted input.
Dependency Management: The checkRequirements function is a good start, but missing dependencies could be handled more gracefully than just printing an error.
JSON Parsing: While jq is used, some JSON parsing relies on direct string manipulation which can be fragile.
Bash Best Practices: Some areas could be optimized for better Bash practices, like quoting variables consistently.
User Experience (Minor): Some output can be a bit noisy, and clearing the screen or providing more visual cues could enhance the experience.
Detailed Review
Header and Metadata
Good: Clear shebang, comments explaining purpose and usage, defined NAME, VERSION, and URL.
Global Variables
DEBUG_MODE: Good for debugging.
TOOLS_DIRECTORY: Good, but consider making it configurable via environment variable or command-line argument for more flexibility.
model, messages, messageCount, etc.: Well-defined for managing state.
configs array: Excellent way to manage configurable parameters.
getHelp Function
Good: Provides a comprehensive list of commands using a cat << EOF block, which is clean.
System Prompt Functions (getSystemPromptIdentity, getSystemPromptTools, getSystemPromptTerminal, getSystemPromptUserCommands, getSystemPrompt)
Excellent: These functions demonstrate a sophisticated approach to building a dynamic system prompt for the AI. This is crucial for guiding the model's behavior and making it aware of its environment and capabilities.
getSystemPromptTools: Dynamically includes tool instructions, which is very effective.
getSystemPromptTerminal: Informs the AI about terminal capabilities (ANSI escape codes), enabling it to produce colored output.
getSystemPromptUserCommands: Makes the AI aware of the commands the user can issue, potentially allowing it to guide the user or respond appropriately to command-related queries.
Color Scheme and Debugging Functions
setColorScheme: Good for visual feedback.
debug, debugJson: Essential for development and troubleshooting.
Core Logic Functions
getTools:
Good: Iterates through TOOLS_DIRECTORY to discover tools and their definitions/instructions.
Improvement: The TODO - redo systemPrompt with new tool setting in setConfigs is important. The system prompt should be regenerated and possibly re-added to messages when tool settings change.
Robustness: Consider adding error handling if definition.json or instructions.txt are malformed.
getConfig, setConfigs, setConfig:
Good: Robust implementation for managing configurations.
Minor Issue: In setConfig, value=${commandArray[2]} seems to assign from commandArray directly, which might be a leftover or a slight confusion. The configValue parameter passed to setConfig should be used.
Original:

```shell
value=${commandArray[2]} # This is problematic if setConfig is called directly with a value
```

Corrected:

```shell
value="$configValue"
```
eval ${name}Config=$value: While functional, eval can be risky if name or value come from untrusted input. For internal configuration variables, it's generally acceptable.
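If the eval is a concern even for internal use, Bash's printf -v can assign to a dynamically named variable without ever re-parsing the value as shell code. A minimal sketch (the variable names are hypothetical, not taken from the script):

```shell
# Sketch: dynamic variable assignment without eval.
name="temperature"
value="0.7"

# printf -v writes into the variable named by its first argument,
# so $value is never re-parsed by the shell.
printf -v "${name}Config" '%s' "$value"

echo "$temperatureConfig"   # 0.7
```

Unlike eval, a hostile value such as `; rm -rf ~` stays an inert string here.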
safeJson:
Good: Uses jq -Rn --arg str "$1" '$str' to safely escape strings for JSON, preventing injection issues.
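For illustration, a minimal sketch of that pattern, assuming safeJson is a thin wrapper around the quoted jq invocation:

```shell
# Sketch of the safeJson pattern: jq encodes the raw string as a
# JSON string literal, escaping quotes, backslashes, and newlines.
safeJson() {
  jq -Rn --arg str "$1" '$str'
}

safeJson 'He said "hi"'
# -> "He said \"hi\""
```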
addMessage, addMessageAssistantToolCall:
Good: Essential for building the messages JSON array, which is sent to Ollama.
Clarity: addMessageAssistantToolCall directly takes the response and extracts tool_calls. This is efficient.
createRequest:
Good: Dynamically builds the JSON request payload for the Ollama API, including think and tools based on configuration.
Clarity: Uses echo with newlines to build the JSON, which is readable for simple cases.
sendRequestToAPI:
Good: Uses curl to interact with the Ollama API. Checks apiConfig.
Robustness: curl -s suppresses output. Consider adding -f to curl to make it fail silently on HTTP errors (e.g., 4xx or 5xx) and check $? to provide more specific error messages if the API call itself fails, not just if the JSON response contains an error field.
Error Message: The ERROR: API URL is not set. message is helpful.
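A hedged sketch of that hardening — the function shape, the apiConfig variable, and the /api/chat path are assumptions based on this review, not the script's exact code:

```shell
# Hypothetical hardening of sendRequestToAPI: curl fails on HTTP
# errors (-f) and transport failures are reported with the exit code.
sendRequestToAPI() {
  local payload="$1"
  local response status

  if [[ -z "$apiConfig" ]]; then
    echo "ERROR: API URL is not set." >&2
    return 1
  fi

  # -f: non-zero exit on HTTP 4xx/5xx; -s: silent; -S: still show errors
  response=$(curl -fsS "$apiConfig/api/chat" \
               -H 'Content-Type: application/json' \
               -d "$payload")
  status=$?
  if (( status != 0 )); then
    echo "ERROR: request to $apiConfig failed (curl exit $status)." >&2
    return 1
  fi

  printf '%s\n' "$response"
}
```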
sendRequest:
Good: Wraps sendRequestToAPI and provides user feedback ("⏳ Waiting...").
echo -e "$ERASE_LINE": Clears the "Waiting" message, nice touch for UX.
clear:
Good: Clears internal messages.
expect for ollama run $model: This is a clever but somewhat brittle way to clear the model's session context directly through ollama run.
Fragility: Relying on expect ">>> " and expect "Cleared session context" can break if Ollama's output changes.
Dependency: Adds expect as a hard requirement.
Alternative: Ollama might have an API endpoint for clearing context or a direct command for it in newer versions, which would be more robust. If not, this is a reasonable workaround.
/dev/null 2>&1: Good for suppressing the expect noise.
processErrors:
Good: Checks for a .error field in the JSON response.
processToolCall:
Critical Function: This is the core of the tool execution logic.
Logic: Recursively calls processToolCall if multiple tool calls are returned, or if a tool call triggers another one. This is correct for handling chained tool calls.
Tool Execution:
toolFile="$TOOLS_DIRECTORY/${function_name}/run.sh ${function_arguments}" and $($toolFile): Significant Security Risk. This directly executes an arbitrary script (run.sh) found in a tool directory with arguments provided by the AI model. If the AI could be prompted to inject malicious arguments or if a malicious run.sh script were placed in TOOLS_DIRECTORY, this could lead to arbitrary code execution on the host system.
Mitigation:
Input Validation/Sanitization: Thoroughly validate and sanitize function_name and function_arguments from the model's response before constructing and executing toolFile. This is very difficult to do perfectly for arbitrary shell commands.
Strict Whitelisting: Instead of just checking availableTools, explicitly define how each tool's run.sh script should be called and what arguments it expects. A case statement or a lookup table could map function_name to a safe execution command.
Jail/Sandbox: Run tools in a more restricted environment (e.g., Docker containers, chroot, firejail) to limit their access to the host system.
Least Privilege: Ensure run.sh scripts run with minimal necessary permissions.
Feedback: Prints "[TOOL] call" and "[TOOL] result", which is very helpful for debugging and user understanding.
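The whitelisting idea above could look like the following sketch. The tool names are hypothetical, and arguments are passed as discrete argv words so the shell never re-parses them:

```shell
# Hypothetical dispatcher: only explicitly listed tools may run, and
# each argument reaches run.sh as a single word, unmodified.
dispatchTool() {
  local tool="$1"
  shift

  case "$tool" in
    weather|calculator|search)   # hypothetical whitelist
      "$TOOLS_DIRECTORY/$tool/run.sh" "$@"
      ;;
    *)
      echo "ERROR: unknown tool '$tool'" >&2
      return 1
      ;;
  esac
}
```

Because `"$@"` is expanded as separate words, an argument like `city=New York` or `../../../important_files` is delivered verbatim rather than being re-interpreted by the shell.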
userRunTool:
Good: Allows users to manually run tools via /exec.
Parameter Parsing: The regex `[[ $parameters =~ ([^[:space:]=]+)=(("[^"]*")|([^[:space:]]+)) ]]` is a decent attempt at parsing param="value" or param=value pairs.
Robustness: This regex might be brittle for complex values (e.g., values containing spaces without quotes, or special characters). Consider using getopts or a more robust parsing library if parameters become more complex.
Security: Same security concerns as processToolCall apply here, as it directly executes run.sh with user-provided parameters.
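One more robust alternative to the regex, sketched here with hypothetical names: accept each param=value pair as its own argument and split only on the first `=`:

```shell
# Hypothetical parameter parser: collects key=value pairs into an
# associative array instead of regex-matching one flat string.
declare -A toolParams

parseParams() {
  local pair key value
  for pair in "$@"; do              # each argv word is one key=value pair
    if [[ $pair != *=* ]]; then
      echo "ERROR: expected key=value, got '$pair'" >&2
      return 1
    fi
    key="${pair%%=*}"               # strip from the first '='
    value="${pair#*=}"              # value may itself contain '=' or spaces
    toolParams["$key"]="$value"
  done
}

parseParams city="New York" units=metric
echo "${toolParams[city]}"          # New York
```

Letting the shell's own word splitting handle quoting (the user types `city="New York"` and the function receives one word) avoids most of the regex's edge cases.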
processUserCommand:
Excellent: Centralizes command parsing and execution.
! command: Directly uses eval "${prompt#*!}". This is another Significant Security Risk as it allows the user to execute any shell command on the system. If this script is ever used in a multi-user environment or exposed in any way, this is a major vulnerability.
Recommendation: Remove the ! command entirely, or implement a strict whitelist of allowed shell commands, carefully sanitizing arguments.
continue: Many case branches use continue. This would typically only make sense within a loop. In this function, it effectively acts like a return 0 (command processed) because the function's scope ends. It's not harmful but slightly unconventional for a function.
ollama commands: Directly calls ollama list, ollama ps, ollama pull, ollama show, ollama stop. This is good for integrating with Ollama's CLI.
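If the ! feature is kept at all, a strict whitelist could replace the eval. A sketch with a hypothetical set of allowed read-only commands (note the naive word split still does not support quoted arguments):

```shell
# Hypothetical replacement for eval "${prompt#*!}": only commands on a
# fixed allow-list run, and the input is split into words, not eval'd.
runShellCommand() {
  local -a words
  read -r -a words <<< "$1"         # naive whitespace split

  case "${words[0]}" in
    ls|pwd|date|whoami)             # hypothetical read-only whitelist
      "${words[@]}"
      ;;
    *)
      echo "ERROR: '${words[0]}' is not on the allowed command list" >&2
      return 1
      ;;
  esac
}
```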
showThinking:
Good: Provides feedback when the model is "thinking," using jq to extract the thinking field.
chat:
Main Loop Logic: Orchestrates the entire chat process: processes user commands, adds messages, sends requests, handles errors, processes tool calls, and displays the response.
Clarity: The sequence of calls within chat is logical and easy to follow.
checkRequirements:
Good: Checks for necessary external commands.
Improvement: For critical requirements, consider exiting with an error or providing clearer instructions on how to install them. Simply printing "Requirement ERROR" might not be sufficient for a user.
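A sketch of a stricter checkRequirements along those lines — the requirement list and hint text are illustrative, not the script's own:

```shell
# Hypothetical stricter checkRequirements: report every missing
# dependency with an install hint, then let the caller abort.
checkRequirements() {
  local missing=0 cmd
  for cmd in "$@"; do
    if ! command -v "$cmd" >/dev/null 2>&1; then
      echo "Requirement ERROR: '$cmd' not found in PATH." >&2
      echo "  hint: e.g. 'sudo apt install $cmd' or 'brew install $cmd'" >&2
      missing=1
    fi
  done
  if (( missing )); then
    echo "Aborting: install the missing requirements and re-run." >&2
    return 1
  fi
}

# At startup, the script could then do:
#   checkRequirements curl jq ollama expect || exit 1
```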
Main Script Execution
Initialization: setConfigs, setColorScheme, checkRequirements are called appropriately at startup.
Splash Screen: The ASCII art logo is a nice touch.
Initial Messages: addMessage "system" "$(getSystemPrompt)" is correctly called to set up the model's initial context.
Main while true loop:
Good: Continuously prompts the user.
Prompt Customization: The prompt >>> is dynamically updated with model name, tool count, request count, and message count, providing excellent context to the user.
Security Considerations (Most Critical Points)
Arbitrary Shell Command Execution (! command): The line eval "${prompt#*!}" in processUserCommand allows a user to execute any shell command by prefixing it with !. This is a severe security vulnerability. Remove this feature unless you have extremely strict controls in place and fully understand the implications. If this script is ever used on a server or shared machine, it's an immediate root compromise vector if the script runs as a privileged user.
AI-Driven Tool Execution (processToolCall): The line toolFile="$TOOLS_DIRECTORY/${function_name}/run.sh ${function_arguments}" followed by $($toolFile) is also a significant risk. The function_name and function_arguments come directly from the AI model's response. A malicious or confused AI could potentially craft these to execute unintended commands or pass dangerous arguments to legitimate tools.
Example: If a tool named delete existed, and the AI was prompted to call it with function_arguments like "../../../important_files", it could be catastrophic.
Mitigation: This is complex. As mentioned, rigorous input validation, whitelisting, and sandboxing are crucial. For a personal script, the risk might be acceptable, but for anything public or shared, it's a major concern.
User-Driven Tool Execution (userRunTool): Similar to the AI-driven execution, allowing users to manually run tools with arbitrary parameters via /exec also presents a risk.
Recommendations for Improvement
Address Security Vulnerabilities First:
Remove the ! command: This is the easiest and most impactful security fix.
Secure Tool Execution:
For processToolCall and userRunTool, reconsider how run.sh scripts are invoked. Instead of direct execution, you could have a central dispatcher function that takes function_name and a sanitized array of arguments. This dispatcher would then explicitly call predefined functions for each tool, passing only the validated arguments.
If direct run.sh execution is desired, thoroughly document the security risks and make it clear that tools must be trusted and arguments carefully considered. Implement strong input validation within run.sh scripts themselves.
Refine setConfig: Correct the value=${commandArray[2]} line to use $configValue for clarity and correctness when the function is called directly.
Error Handling for External Commands:
Add set -e at the top of the script to exit immediately if a command exits with a non-zero status. Be mindful of where this might be too aggressive (e.g., ollama commands that might legitimately fail and you want to catch).
For curl and jq calls, check their exit status ($?) to provide more specific error messages if they fail (e.g., if ! command -v jq &>/dev/null; then ...).
JSON Parsing Robustness: While jq is used, ensure all JSON extraction is done via jq to prevent issues with unexpected JSON structures. Avoid manual string parsing of JSON when jq can do it.
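For example, extracting every field via jq with explicit defaults degrades gracefully on unexpected structures. The response shape shown is a typical Ollama /api/chat reply, assumed here for illustration:

```shell
# Sketch: all extraction goes through jq, with '// <default>' fallbacks
# so missing fields yield empty strings or empty arrays, never garbage.
response='{"message":{"role":"assistant","content":"hi"},"done":true}'

content=$(jq -r '.message.content // empty' <<< "$response")
toolCalls=$(jq -c '.message.tool_calls // []' <<< "$response")
errorMsg=$(jq -r '.error // empty' <<< "$response")

echo "content=$content toolCalls=$toolCalls error=${errorMsg:-none}"
# content=hi toolCalls=[] error=none
```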
Configuration Persistence: The current configuration (configs array) is reset every time the script runs. Consider saving/loading configurations to/from a file (e.g., ~/.ollama-bash-toolshed-config) using source or jq.
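A minimal persistence sketch for a configs array of "name:value" entries — the file format is an assumption, and a temp file stands in for a real path like ~/.ollama-bash-toolshed-config:

```shell
# Hypothetical persistence for the configs array, one entry per line.
CONFIG_FILE=$(mktemp)   # demo path; the real script might use $HOME

saveConfigs() {
  printf '%s\n' "${configs[@]}" > "$CONFIG_FILE"
}

loadConfigs() {
  [[ -f "$CONFIG_FILE" ]] || return 0   # first run: keep defaults
  mapfile -t configs < "$CONFIG_FILE"
}

configs=("temperature:0.7" "tools:true")
saveConfigs
configs=()
loadConfigs
echo "${configs[0]}"   # temperature:0.7
```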
"No Model Loaded" Handling: The script proceeds even if no model is loaded, only printing an error later. It might be better to block most operations or prompt the user to load a model immediately if one isn't provided as a command-line argument.
getTools System Prompt Update: Implement the TODO for regenerating the system prompt when toolsConfig changes. This is important so the AI is aware of its current tool access.
Bash Best Practices:
Quote Variables: Consistently quote variables, especially when they might contain spaces or special characters (e.g., "$model", "${commandArray[1]}"). While Bash often handles this implicitly in certain contexts, explicit quoting is safer.
Use local: Continue using local for all function variables to prevent unintended global variable modifications.
Arithmetic: Use ((var++)) for incrementing integers, which is more efficient than var=$((var + 1)). You've done this in many places, which is good.
Example of a minor fix (for setConfig):

```shell
setConfig() {
    local configName="$1"
    local configValue="$2"     # This correctly receives the value
    local newConfigs=()
    for config in "${configs[@]}"; do
        name="${config%%:*}"
        value="${config#*:}"
        if [[ "$name" == "$configName" ]]; then
            value="$configValue"   # Use the passed value directly
            echo "$name: $value"
        fi
        # set var ${name}Config to $value
        eval ${name}Config=$value
        newConfigs+=("$name:$value")
    done
    configs=("${newConfigs[@]}")
    setConfigs
}
```
Conclusion
This is an impressive and ambitious Bash script that pushes the boundaries of what can be done with shell scripting for AI interaction. The "toolshed" concept is particularly powerful. However, the critical security vulnerabilities related to arbitrary command execution (especially the ! command and AI-driven tool execution) must be addressed immediately if this script is to be used in any shared or untrusted environment. Once those are mitigated, further refinements in error handling and Bash best practices will make it even more robust and user-friendly.