DuckDB CLI specialist for SQL analysis, data processing and file conversion. Use for SQL queries, CSV/Parquet/JSON analysis, database queries, or data conversion. Triggers on "duckdb", "sql", "query", "data analysis", "parquet", "convert data".
Helps with data analysis, SQL queries, and file conversion via the DuckDB CLI.
# CSV
duckdb -c "SELECT * FROM 'data.csv' LIMIT 10"
# Parquet
duckdb -c "SELECT * FROM 'data.parquet'"
# Multiple files with glob
duckdb -c "SELECT * FROM read_parquet('logs/*.parquet')"
# JSON
duckdb -c "SELECT * FROM read_json_auto('data.json')"
# Create/open database
duckdb my_database.duckdb
# Read-only mode
duckdb -readonly existing.duckdb
| Flag | Format |
|---|---|
| -csv | Comma-separated |
| -json | JSON array |
| -table | ASCII table |
| -markdown | Markdown table |
| -html | HTML table |
| -line | One value per line |

| Argument | Description |
|---|---|
| -c COMMAND | Run SQL and exit |
| -f FILENAME | Run script from file |
| -init FILE | Use alternative to ~/.duckdbrc |
| -readonly | Open in read-only mode |
| -echo | Show commands before execution |
| -bail | Stop on first error |
| -header / -noheader | Show/hide column headers |
| -nullvalue TEXT | Text for NULL values |
| -separator SEP | Column separator |

duckdb -c "COPY (SELECT * FROM 'input.csv') TO 'output.parquet' (FORMAT PARQUET)"
duckdb -c "COPY (SELECT * FROM 'input.parquet') TO 'output.csv' (HEADER, DELIMITER ',')"
duckdb -c "COPY (SELECT * FROM read_json_auto('input.json')) TO 'output.parquet' (FORMAT PARQUET)"
duckdb -c "COPY (SELECT * FROM 'data.csv' WHERE amount > 1000) TO 'filtered.parquet' (FORMAT PARQUET)"
| Command | Description |
|---|---|
| .tables [pattern] | Show tables (with LIKE pattern) |
| .schema [table] | Show CREATE statements |
| .databases | Show attached databases |

| Command | Description |
|---|---|
| .mode FORMAT | Change output format |
| .output file | Send output to file |
| .once file | Next output to file |
| .headers on/off | Show/hide column headers |
| .separator COL ROW | Set separators |

| Command | Description |
|---|---|
| .timer on/off | Show execution time |
| .echo on/off | Show commands before execution |
| .bail on/off | Stop on error |
| .read file.sql | Run SQL from file |

| Command | Description |
|---|---|
| .edit or \e | Open query in external editor |
| .help [pattern] | Show help |

| Shortcut | Action |
|---|---|
| Home / End | Start/end of line |
| Ctrl+Left/Right | Jump word |
| Ctrl+A / Ctrl+E | Start/end of buffer |

| Shortcut | Action |
|---|---|
| Ctrl+P / Ctrl+N | Previous/next command |
| Ctrl+R | Search history |
| Alt+< / Alt+> | First/last in history |

| Shortcut | Action |
|---|---|
| Ctrl+W | Delete word backward |
| Alt+D | Delete word forward |
| Alt+U / Alt+L | Uppercase/lowercase word |
| Ctrl+K | Delete to end of line |

| Shortcut | Action |
|---|---|
| Tab | Autocomplete / next suggestion |
| Shift+Tab | Previous suggestion |
| Esc+Esc | Undo autocomplete |

Context-aware autocompletion is triggered with Tab and suggests keywords, table names, column names, and file names, for example while typing:
CREATE TABLE sales AS SELECT * FROM 'sales_2024.csv';
INSERT INTO sales SELECT * FROM 'sales_2025.csv';
COPY sales TO 'backup.parquet' (FORMAT PARQUET);
SELECT
COUNT(*) as count,
AVG(amount) as average,
SUM(amount) as total
FROM 'transactions.csv';
SELECT
category,
COUNT(*) as count,
SUM(amount) as total
FROM 'data.csv'
GROUP BY category
ORDER BY total DESC;
SELECT a.*, b.name
FROM 'orders.csv' a
JOIN 'customers.parquet' b ON a.customer_id = b.id;
DESCRIBE SELECT * FROM 'data.csv';
# Read from stdin
cat data.csv | duckdb -c "SELECT * FROM read_csv('/dev/stdin')"
# Pipe to another command
duckdb -csv -c "SELECT * FROM 'data.parquet'" | head -20
# Write to stdout
duckdb -c "COPY (SELECT * FROM 'data.csv') TO '/dev/stdout' (FORMAT CSV)"
Save common settings in ~/.duckdbrc:
.timer on
.mode duckbox
.maxrows 50
.highlight on
.keyword green
.constant yellow
.comment brightblack
.error red
Open complex queries in your editor:
.edit
The editor is chosen from the first one set, in order: DUCKDB_EDITOR → EDITOR → VISUAL → vi
Secure mode restricts file system access. When enabled, dot commands that read or write files or invoke the shell are disabled: .read, .output, .import, .sh, etc.

Tips:
- Use LIMIT on large files for a quick preview
- read_csv_auto and read_json_auto guess column types
- Check memory_limit values on some Ubuntu versions