# nvim-duckdb

A Neovim plugin that integrates DuckDB via the LuaJIT FFI to enable SQL queries on CSV and JSON files directly from buffers.

## Features
- **Direct Buffer Queries**: Query CSV, JSON, and JSONL data without saving files
- **Multi-Buffer Joins**: Query and join data across multiple buffers
- **Query History**: Persistent history with search and re-execution
- **SQL Scratch Buffer**: Dedicated buffer for writing and executing queries
- **Result Actions**: Filter, sort, export, and yank results in multiple formats
- **Schema Statistics**: `SUMMARIZE` stats and column-level hover information
- **Data Validation**: Inline diagnostics for CSV/JSON parsing errors
- **Format Conversion**: Convert between CSV, JSON, JSONL, and Parquet
- **HTTP Export**: POST query results directly to APIs
- **Telescope Integration**: Optional pickers for history and buffers
## Tutorials

Step-by-step guides to get started with nvim-duckdb.
### Query a CSV File

**Goal:** Query a CSV file and view the results.

1. Install DuckDB:

   ```sh
   # macOS
   brew install duckdb
   # Ubuntu/Debian
   sudo apt install libduckdb-dev
   ```

2. Install the plugin with lazy.nvim:

   ```lua
   { 'hello-world-bfree/nvim-duckdb', config = function() require('duckdb').setup() end }
   ```

3. Create a test file `employees.csv`:

   ```csv
   name,age,department,salary
   Alice,30,Engineering,95000
   Bob,25,Marketing,65000
   Charlie,35,Engineering,105000
   ```

4. Open the file and run:

   ```vim
   :DuckDB SELECT * FROM buffer WHERE age > 28
   ```

5. A floating window shows the filtered results. Press `q` to close it.
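The same query can also be issued from Lua through the plugin's `query` function (covered in the API reference), which is handy in mappings and scripts; a minimal sketch:

```lua
-- Lua equivalent of :DuckDB SELECT * FROM buffer WHERE age > 28;
-- 'float' is the documented default display mode
require('duckdb').query('SELECT * FROM buffer WHERE age > 28', {
  display = 'float',
})
```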
### Use the SQL Scratch Buffer

**Goal:** Write multi-line queries in a dedicated SQL buffer.

1. Open the scratch buffer:

   ```vim
   :DuckDBScratch
   ```

2. Write a query (the buffer persists across sessions):

   ```sql
   SELECT department, COUNT(*) AS headcount, AVG(salary) AS avg_salary
   FROM buffer
   GROUP BY department
   ORDER BY avg_salary DESC;
   ```

3. Position the cursor on the query and press `<CR>` to execute it.

4. The query runs against the last active data buffer.
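If you want the scratch file stored somewhere other than the default `stdpath('cache')/duckdb_scratch.sql`, the `scratch_path` setup option can point it elsewhere; a sketch (the target path here is just an example):

```lua
require('duckdb').setup({
  -- Keep the scratch buffer alongside your Neovim config (example path)
  scratch_path = vim.fn.stdpath('config') .. '/duckdb_scratch.sql',
})
```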
### Work with Results

**Goal:** Filter, sort, and export query results.

1. Run a query:

   ```vim
   :DuckDB SELECT * FROM buffer
   ```

2. In the result window, use these keys:

   - `f` — add a filter (prompts for a WHERE clause)
   - `s` — sort by the column under the cursor
   - `ya` — yank as JSON array
   - `yc` — yank as CSV
   - `e` — export to file
   - `r` — re-run the query

3. Try filtering: press `f`, enter `salary > 80000`, and see the filtered results.
### Use Query History

**Goal:** Browse and re-execute previous queries.

1. Run several queries to build up history.

2. Open the history picker:

   ```vim
   :DuckDBHistory
   ```

3. Select a query to re-execute it, or browse with the arrow keys.

4. History persists across Neovim sessions.
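History is also scriptable through the `duckdb.history` module described in the API reference; for example, to print recent queries or search stored ones:

```lua
local history = require('duckdb.history')

-- Print the last five queries
for _, entry in ipairs(history.recent(5)) do
  print(entry.query)
end

-- Find up to ten stored queries mentioning "salary"
local hits = history.search('salary', 10)
```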
## How-To Guides

Task-oriented guides for specific workflows.
### Multi-Buffer Joins

Open two files, `orders.csv` and `customers.csv`, then:

```vim
:DuckDB SELECT c.name, COUNT(o.id) AS orders
  FROM buffer('customers.csv') c
  JOIN buffer('orders.csv') o ON c.id = o.customer_id
  GROUP BY c.name
```

Buffer references:

- `buffer` — the current buffer
- `buffer('filename.csv')` — a buffer by name
- `buffer(5)` — a buffer by number
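These reference forms can be mixed in a single query. For example, joining the current buffer against a named buffer from Lua (the `regions.csv` file and its columns are hypothetical):

```lua
-- Join the current buffer with another open buffer, referenced by file name
require('duckdb').query([[
  SELECT a.*, r.region
  FROM buffer a
  JOIN buffer('regions.csv') r ON a.region_id = r.id
]])
```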
### Data Validation

Check a CSV/JSON buffer for parsing errors:

```vim
:DuckDBValidate
```

Errors appear as inline diagnostics. Navigate between them with `]d` and `[d`. Clear the diagnostics with:

```vim
:DuckDBClearValidation
```

### Schema Statistics

On a CSV/JSON buffer, press `K` on any line to see column stats (count, nulls, min, max, unique values). Or run `SUMMARIZE` for full statistics:

```vim
:DuckDBSummary
```

### Format Conversion

Convert the current buffer to another format:

```vim
:DuckDBConvert json                    " Auto-names output
:DuckDBConvert parquet output.parquet  " Specify path
```

Supported formats: csv, json, jsonl, parquet.

### HTTP Export

After running a query, POST the results to a URL:

```vim
:DuckDBPost https://api.example.com/data
```

Then select a JSON format (array, object keyed by first column, or single row).

### Transform with Diff Preview

Preview changes before applying them:

```vim
:DuckDBTransform SELECT name, salary * 1.1 AS new_salary FROM buffer
```

A diff view shows the original vs. the transformed data. Choose "Apply changes" or "Cancel".
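The same transform can be triggered from Lua via the API's `transform` function; a sketch (the cleanup query is illustrative):

```lua
-- Preview a column cleanup in the diff view before applying it
require('duckdb').transform([[
  SELECT name, TRIM(department) AS department, salary
  FROM buffer
]])
```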
### Telescope Integration

If Telescope is installed:

```lua
-- In your config
require('duckdb.integrations.telescope').setup()
```

```vim
" Then use
:Telescope duckdb history
:Telescope duckdb buffers
```

### Execute a Visual Selection

Select SQL text in any buffer and press `<leader>dq` to execute it.

### Export Results from Lua

```lua
require('duckdb').query('SELECT * FROM buffer', {
  export = '/tmp/results.csv',
  format = 'csv'  -- or 'json', 'table'
})
```

### Disable Default Keymaps

```lua
vim.g.duckdb_no_default_keymaps = 1
```

## Reference

Complete reference for commands, keymaps, configuration, and the Lua API.
### Commands

| Command | Description |
|---|---|
| `:DuckDB <query>` | Execute SQL query on buffer |
| `:DuckDBSchema [buffer]` | Show buffer schema |
| `:DuckDBBuffers` | List queryable buffers |
| `:DuckDBValidate [buffer]` | Validate CSV/JSON with inline diagnostics |
| `:DuckDBScratch` | Open persistent SQL scratch buffer |
| `:DuckDBHistory` | Browse and re-run query history |
| `:DuckDBSummary [buffer]` | Show SUMMARIZE statistics |
| `:DuckDBHover` | Show column stats under cursor |
| `:DuckDBConvert <format> [path]` | Convert to csv/json/jsonl/parquet |
| `:DuckDBPost <url>` | POST query results to URL |
| `:DuckDBTransform [query]` | Transform with diff preview |
### Keymaps

| Key | Mode | Description |
|---|---|---|
| `<leader>dq` | n | Query prompt |
| `<leader>dq` | v | Execute visual selection |
| `<leader>ds` | n | Show schema |
| `<leader>dv` | n | Validate buffer |
| `<leader>db` | n | List buffers |
| `<leader>dp` | n | Preview (LIMIT 100) |
| `<leader>d1` | n | Preview (LIMIT 10) |
| `<leader>d5` | n | Preview (LIMIT 50) |
| `<leader>da` | n | Select all |
| `<leader>dn` | n | Count rows |
| `<leader>dh` | n | Query history |
| `<leader>dS` | n | Open scratch buffer |
| `<leader>du` | n | Show summary stats |
| `<leader>dt` | n | Transform with diff |
| `K` | n | Column stats (CSV/JSON buffers) |
### Result Window Keys

| Key | Description |
|---|---|
| `f` | Filter (add WHERE clause) |
| `s` | Sort by column under cursor |
| `ya` | Yank as JSON array |
| `yo` | Yank JSON (choose format) |
| `yc` | Yank as CSV |
| `e` | Export to file |
| `r` | Re-run query |
| `q` / `<Esc>` | Close window |
### Scratch Buffer Keys

| Key | Description |
|---|---|
| `<CR>` | Execute statement at cursor |
| `<C-CR>` | Execute statement at cursor |
### Configuration

```lua
require('duckdb').setup({
  -- Display
  max_rows = 1000,            -- Maximum rows to display
  max_col_width = 50,         -- Maximum column width
  auto_close = false,         -- Auto-close result window
  default_format = 'table',   -- Export format: 'csv', 'json', 'table'

  -- History
  history_limit = 500,        -- Maximum history entries

  -- Scratch buffer
  scratch_path = nil,         -- Custom path (nil = stdpath('cache')/duckdb_scratch.sql)

  -- Hover/inline features
  hover_stats = true,         -- Enable K mapping for column stats
  inline_preview = true,      -- Enable inline result previews
  inline_preview_debounce_ms = 500,
})
```

### API

Queries:

```lua
local duckdb = require('duckdb')

-- Execute query with options
duckdb.query(query, {
  buffer = nil,          -- Buffer identifier (number, name, or nil)
  display = 'float',     -- 'float', 'split', or 'none'
  export = nil,          -- Export file path
  format = nil,          -- Export format
  title = nil,           -- Window title
  skip_history = false   -- Don't add to history
})

-- Get results as a Lua table
local rows, err = duckdb.query_as_table('SELECT * FROM buffer')
for _, row in ipairs(rows) do
  print(row.name, row.salary)
end

-- Interactive query prompt
duckdb.query_prompt()

-- Execute visual selection
duckdb.query_visual()
```

Schema and validation:

```lua
-- Get schema
duckdb.get_schema(buffer_id)

-- List queryable buffers
local buffers = duckdb.list_queryable_buffers()

-- Validate buffer
local result, err = duckdb.validate(buffer_id, {
  show_diagnostics = true,
  show_float = true
})

-- Clear validation diagnostics
duckdb.clear_validation(buffer_id)
```

History and scratch buffer:

```lua
-- Show history picker
duckdb.history()

-- Clear history
duckdb.clear_history()

-- Open scratch buffer
duckdb.scratch()
```

Statistics:

```lua
-- Show SUMMARIZE stats
duckdb.summary(buffer_id)

-- Show column stats popup
duckdb.hover(buffer_id, column_name)
```

Conversion, export, and transforms:

```lua
-- Convert format
duckdb.convert('parquet', '/tmp/output.parquet')

-- POST to URL (from result buffer)
duckdb.post('https://api.example.com/data')

-- Transform with diff preview
duckdb.transform('SELECT * FROM buffer WHERE active = true')
```

History module:

```lua
local history = require('duckdb.history')

-- Add entry
history.add({
  query = 'SELECT * FROM buffer',
  timestamp = os.time(),
  row_count = 100,
  execution_time_ms = 50,
  buffer_name = 'data.csv'
})

-- Get entries
local entries = history.get({ limit = 10, search = 'SELECT' })

-- Search
local results = history.search('users', 20)

-- Recent queries
local recent = history.recent(5)

-- Clear all
history.clear()
```

Result formatting:

```lua
local actions = require('duckdb.actions')

-- Format result as JSON
local json_array = actions.format_json_array(result)
local json_object = actions.format_json_object(result)
local json_single = actions.format_json_single(result)

-- Format result as CSV
local csv = actions.format_csv(result)
```

Query result structure:

```lua
{
  columns = {'name', 'age'},  -- Column names
  rows = {{'Alice', 30}},     -- Row data
  row_count = 1,              -- Number of rows
  column_count = 2,           -- Number of columns
  rows_changed = 0            -- For DML statements
}
```

Validation result structure:

```lua
{
  valid = true,
  errors = {
    { line = 3, column = 5, message = 'Invalid value', severity = 'error' }
  },
  warnings = {
    { line = 7, message = 'Inconsistent column count', severity = 'warning' }
  }
}
```

## Requirements

- Neovim 0.7+ (includes LuaJIT)
- DuckDB 0.9.0+ shared library (`libduckdb`)
Install the DuckDB shared library:

- macOS: `brew install duckdb`
- Ubuntu/Debian: `sudo apt install libduckdb-dev`
- Arch Linux: `sudo pacman -S duckdb`
- From source: download from the DuckDB Downloads page
## Installation

lazy.nvim:

```lua
{
  'hello-world-bfree/nvim-duckdb',
  config = function()
    require('duckdb').setup()
  end
}
```

packer.nvim:

```lua
use {
  'hello-world-bfree/nvim-duckdb',
  config = function()
    require('duckdb').setup()
  end
}
```

vim-plug:

```vim
Plug 'hello-world-bfree/nvim-duckdb'
lua << EOF
require('duckdb').setup()
EOF
```

After installing, run `:checkhealth duckdb` to verify the setup.

## Troubleshooting

Ensure `libduckdb` is installed and accessible:

```sh
ldconfig -p | grep duckdb        # Linux
ls /usr/local/lib | grep duckdb  # macOS
```

Use Neovim, not Vim; Neovim includes LuaJIT by default.
If a table is not found, check the table references:

- `buffer` for the current buffer
- `buffer('name')` for a buffer by name
- `buffer(5)` for a buffer by number
DuckDB uses `read_csv_auto` with auto-detection. For edge cases, run `:DuckDBValidate` to identify issues.
History is stored in `stdpath('data')/duckdb_history.json`. Check write permissions on that directory.
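If the history file itself becomes unreadable, it can be reset with the documented `clear_history` function:

```lua
require('duckdb').clear_history()
```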
