A command-line tool that converts natural language instructions into shell commands using AI. Simply describe what you want to do in plain English, and nlsh will generate and execute the appropriate shell command.
- 🧠 Natural language to shell command conversion
- 🤖 Multiple AI backends: OpenAI GPT and Google Gemini
- 🛡️ Built-in safety checks for dangerous commands
- ⚙️ Configurable settings via `.nlshrc`
- 🎨 Colored output for better readability
- 📝 Command history and context awareness
- 🔄 Interactive and single command modes
- 🔒 Confirmation for potentially dangerous operations
- Go 1.24 or later
- OpenAI API key or Google Gemini API key
Install directly using curl:

```sh
curl -fsSL https://raw.githubusercontent.com/abakermi/nlsh/master/install.sh | bash
```

Or install with Go:

```sh
go install github.com/abakermi/nlsh@latest
```

Or install from source:

- Clone the repository:

  ```sh
  git clone https://github.com/abakermi/nlsh.git
  cd nlsh
  ```

- Set your API key as an environment variable:

  ```sh
  export OPENAI_API_KEY='your-api-key-here'
  # or
  export GEMINI_API_KEY='your-api-key-here'
  ```

- Run the installation script:

  ```sh
  ./install.sh
  ```

- Restart your terminal or source your shell configuration:

  ```sh
  source ~/.zshrc
  # or
  source ~/.bashrc
  ```
```sh
# For OpenAI
export OPENAI_API_KEY='your-api-key-here'

# For Gemini
export GEMINI_API_KEY='your-api-key-here'
```

Start an interactive session:

```sh
nlsh
```

Or run a single command:

```sh
nlsh "list all files in current directory"
```

More examples:

```sh
# List files
nlsh "show me all hidden files"

# Git operations
nlsh "commit all changes with message 'update readme'"

# Docker operations
nlsh "show all running containers"
```

You can customize nlsh's behavior by creating a `.nlshrc` file in your home directory. The configuration file uses TOML format.
Set the `backend` option to choose your AI provider:

```toml
# Backend to use: "openai" or "gemini"
backend = "gemini"
```

You can use local models compatible with the OpenAI API by configuring `base_url`:

```toml
[openai]
model = "llama3"                        # Replace with your local model name
base_url = "http://localhost:11434/v1"  # Example for Ollama
```

If `base_url` is set, `OPENAI_API_KEY` is not required.
A full example configuration:

```toml
# Backend to use: "openai" or "gemini"
backend = "openai"

[openai]
model = "gpt-4-turbo-preview"
temperature = 0.7

[gemini]
model = "gemini-2.0-flash"
temperature = 0.7

[safety]
confirm_execution = true
allowed_commands = [
    "ls *",
    "touch *",
    "mkdir *",
    "echo *",
    "cat *",
    "cp *",
    "mv *",
    "git *",
    "docker *",
    "code *",
    "vim *",
    "nano *"
]
denied_commands = [
    "rm -rf /*",
    "rm -rf /",
    "dd if=/dev/*",
    "mkfs.*",
    "> /dev/*",
    "shutdown *",
    "reboot *",
    "halt *",
    "*--no-preserve-root*"
]
```

- Command confirmation before execution
- Configurable allowed/denied commands
- Pattern-based command filtering
- Protection against dangerous operations
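The pattern-based filtering described above can be sketched with Go's standard `path.Match` globbing. This is a minimal illustration of the deny-then-allow idea, not nlsh's actual implementation; note that `path.Match`'s `*` does not cross `/`, so a production matcher would need broader wildcard handling for commands containing paths.

```go
package main

import (
	"fmt"
	"path"
)

// checkCommand classifies a generated command against deny and allow
// glob patterns, in the spirit of the denied_commands/allowed_commands
// settings shown above. Deny patterns take precedence over allow patterns.
func checkCommand(cmd string, allowed, denied []string) (bool, string) {
	for _, pat := range denied {
		if ok, _ := path.Match(pat, cmd); ok {
			return false, "denied by pattern: " + pat
		}
	}
	for _, pat := range allowed {
		if ok, _ := path.Match(pat, cmd); ok {
			return true, "allowed by pattern: " + pat
		}
	}
	return false, "no allow pattern matched"
}

func main() {
	allowed := []string{"ls *", "git *"}
	denied := []string{"rm -rf /", "rm -rf /*"}

	for _, cmd := range []string{"ls -la", "git status", "rm -rf /"} {
		ok, why := checkCommand(cmd, allowed, denied)
		fmt.Printf("%q ok=%v (%s)\n", cmd, ok, why)
	}
}
```

Commands that match no allow pattern fall through to the confirmation prompt rather than running silently, which matches the confirm-before-execute behavior listed above.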
This project is open source and available under the MIT License.
