ClickHouse Connections
TablePro supports ClickHouse via its HTTP interface. ClickHouse is a column-oriented OLAP database built for real-time analytics on large datasets. TablePro connects over HTTP (port 8123 by default), not the native TCP protocol.
Quick Setup
Click New Connection, select ClickHouse, enter host/port/credentials/database, and click Create. The driver plugin installs automatically on first use; you can also install it manually via Settings > Plugins > Browse > ClickHouse Driver.
Connection Settings
| Field | Default | Notes |
|---|---|---|
| Host | localhost | |
| Port | 8123 | HTTP port (not 9000 native TCP) |
| Username | default | Set password in production |
| Database | default | |
Uses HTTP API (HTTPS on port 8443 with SSL/TLS enabled). Cloud providers typically expose HTTP(S) only.
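The request TablePro sends over the HTTP interface can be sketched in Python. This is an illustrative sketch, not TablePro's actual internals: the helper name and its parameters are assumptions, and credentials (normally sent via HTTP basic auth or the `X-ClickHouse-User`/`X-ClickHouse-Key` headers) are omitted.

```python
import urllib.parse

def clickhouse_http_url(host, port, database, query, secure=False):
    """Build a ClickHouse HTTP interface URL for a single query.

    The target database and the SQL text travel as URL parameters;
    switch to https (port 8443) when SSL/TLS is enabled.
    """
    scheme = "https" if secure else "http"
    params = urllib.parse.urlencode({"database": database, "query": query})
    return f"{scheme}://{host}:{port}/?{params}"

url = clickhouse_http_url("localhost", 8123, "default", "SELECT 1")
# → http://localhost:8123/?database=default&query=SELECT+1
```

Fetching this URL (e.g. with `urllib.request.urlopen`) returns the query result as plain text.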
Example Configurations
Local: host localhost:8123, user default, empty password
Docker: clickhouse/clickhouse-server:latest, set CLICKHOUSE_USER/CLICKHOUSE_PASSWORD env
ClickHouse Cloud: Port 8443, enable SSL/TLS
Remote: Use SSH tunneling for unencrypted HTTP
SSL/TLS: Enable in connection form for HTTPS (port 8443). Cloud requires HTTPS.
Features
Database Browsing
After connecting, the sidebar lists all databases on the server. Expand a database to see its tables and views. Switch databases using the database switcher in the toolbar.
Table Browsing
For each table, TablePro shows:
- Structure: Columns with ClickHouse data types, default expressions, and comments
- Indexes: Data skipping indices (minmax, set, bloom_filter, etc.)
- DDL: The full CREATE TABLE statement including engine and settings
- Parts: Partition and part details from system.parts (rows, disk size, active status)
Engines:
| Engine | Use Case |
|---|---|
| MergeTree | Primary engine for production analytics tables |
| ReplacingMergeTree | Deduplication by sorting key |
| SummingMergeTree | Pre-aggregation of numeric columns |
| AggregatingMergeTree | Incremental aggregation with AggregateFunction columns |
| Log / TinyLog | Small tables, development, and testing |
| Memory | In-memory tables for temporary data |
| Distributed | Queries across shards in a cluster |
Sorting/Partition Keys: Critical for query performance. ClickHouse stores data in sorting-key order, so filters on the sorting key let it skip most of the table; the partition key controls how data is split into parts on disk.
Data Skipping Indices
ClickHouse supports secondary indices that skip granules during queries. TablePro shows these in the Indexes tab:
```sql
-- Example: bloom filter index on a URL column
ALTER TABLE hits ADD INDEX url_idx url TYPE bloom_filter GRANULARITY 4;
```
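The GRANULARITY clause controls how coarse the index is: each skipping-index entry summarizes that many primary-index granules. As a quick sanity check of the arithmetic (assuming the default index_granularity of 8192 rows):

```python
def rows_per_index_entry(index_granularity: int, skip_granularity: int) -> int:
    """Rows summarized by one data-skipping index entry.

    A skipping index declared with GRANULARITY n aggregates n
    primary-index granules of index_granularity rows each.
    """
    return index_granularity * skip_granularity

# With the defaults and the GRANULARITY 4 from the example above:
rows_per_index_entry(8192, 4)  # → 32768 rows per bloom-filter entry
```

A coarser granularity means a smaller index but fewer chances to skip data.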
Query Progress Tracking
During query execution, the toolbar shows live progress: rows read and bytes processed. After completion, a summary displays total execution time, rows read, and bytes read. This data is polled from system.processes.
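A minimal sketch of how such a summary line could be rendered from a system.processes row (read_rows, read_bytes, and elapsed are real columns of that table; the function itself is illustrative, not TablePro's code):

```python
def progress_summary(row: dict) -> str:
    """Render a one-line progress summary from a system.processes row.

    Assumes the row carries read_rows, read_bytes, and elapsed, e.g. from:
    SELECT read_rows, read_bytes, elapsed FROM system.processes
    """
    mib = row["read_bytes"] / (1024 * 1024)
    return f"{row['read_rows']:,} rows, {mib:.1f} MiB in {row['elapsed']:.1f}s"

progress_summary({"read_rows": 1500000, "read_bytes": 52428800, "elapsed": 0.42})
# → '1,500,000 rows, 50.0 MiB in 0.4s'
```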
EXPLAIN Variants
ClickHouse supports multiple EXPLAIN modes. Click the Explain dropdown in the query editor to choose:
| Variant | Description |
|---|---|
| Plan | Logical query plan (default) |
| Pipeline | Physical execution pipeline with thread/port info |
| AST | Abstract syntax tree of the parsed query |
| Syntax | Query after syntax optimizations |
| Estimate | Estimated rows, marks, and parts to read |
Server-side Query Cancellation
When you cancel a running query, TablePro sends a KILL QUERY command to the ClickHouse server. This stops the query on the server side, not just the HTTP connection.
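Server-side cancellation works because each query can be tagged with a client-chosen query_id (the HTTP interface accepts it as a `?query_id=...` parameter), which KILL QUERY then matches. A hedged sketch of building that statement (the escaping and the id format are illustrative):

```python
def kill_query_sql(query_id: str) -> str:
    """SQL issued to cancel a running query on the server.

    Assumes the original query was started with an explicit query_id.
    """
    escaped = query_id.replace("'", "\\'")  # naive single-quote escaping
    return f"KILL QUERY WHERE query_id = '{escaped}'"

kill_query_sql("tablepro-42")  # → KILL QUERY WHERE query_id = 'tablepro-42'
```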
Query Editor
Write and execute ClickHouse SQL queries in the editor:
```sql
-- Aggregation query
SELECT
    toDate(event_time) AS date,
    count() AS events,
    uniqExact(user_id) AS unique_users
FROM analytics.events
WHERE event_time >= now() - INTERVAL 7 DAY
GROUP BY date
ORDER BY date;

-- System tables
SELECT name, engine, total_rows, total_bytes
FROM system.tables
WHERE database = 'analytics';

-- Table partitions
SELECT partition, sum(rows) AS rows, formatReadableSize(sum(bytes_on_disk)) AS size
FROM system.parts
WHERE table = 'events' AND active
GROUP BY partition
ORDER BY partition;
```
Data Editing
TablePro supports editing cell values, inserting rows, and deleting rows in ClickHouse tables. Edits are submitted as standard INSERT, ALTER TABLE UPDATE, and ALTER TABLE DELETE statements.
ClickHouse mutations (UPDATE and DELETE) are asynchronous. They execute in the background and may take time to complete on large tables. See the Limitations section for details.
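Because mutations finish in the background, a client that needs the result to be visible has to poll system.mutations. A sketch of that loop, with the ClickHouse client abstracted as an assumed `run_query` callable (any function that executes SQL and returns rows):

```python
import time

def wait_for_mutations(run_query, table, timeout=60.0, interval=1.0):
    """Poll system.mutations until all mutations on `table` finish.

    Returns True once no unfinished mutations remain, False on timeout.
    `run_query` is a stand-in for a real ClickHouse client call.
    """
    sql = (
        "SELECT count() FROM system.mutations "
        f"WHERE table = '{table}' AND is_done = 0"
    )
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        (pending,), = run_query(sql)
        if pending == 0:
            return True
        time.sleep(interval)
    return False

# Fake client for illustration: one pending mutation, then none.
answers = iter([[(1,)], [(0,)]])
wait_for_mutations(lambda sql: next(answers), "events", interval=0.0)  # → True
```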
Export and Import
Export query results or table data to CSV, JSON, and other formats. Import data from CSV files into ClickHouse tables.
System Databases
ClickHouse includes built-in system databases:
| Database | Description |
|---|---|
| system | Server metrics, query logs, parts info, cluster state |
| information_schema | SQL-standard metadata views (tables, columns, schemata) |
| INFORMATION_SCHEMA | Uppercase alias of information_schema |
The system database is particularly useful for monitoring:
```sql
-- Recent queries
SELECT query, read_rows, query_duration_ms, memory_usage
FROM system.query_log
WHERE type = 'QueryFinish'
ORDER BY event_time DESC
LIMIT 20;

-- Disk usage by table
SELECT database, table, formatReadableSize(sum(bytes_on_disk)) AS size
FROM system.parts
WHERE active
GROUP BY database, table
ORDER BY sum(bytes_on_disk) DESC;
```
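To make the formatReadableSize calls above concrete, here is an approximate Python equivalent (binary units, two decimals; the exact ClickHouse output format may differ at the edges):

```python
def format_readable_size(n: float) -> str:
    """Approximate ClickHouse's formatReadableSize(): binary units."""
    units = ["B", "KiB", "MiB", "GiB", "TiB", "PiB"]
    for unit in units:
        if abs(n) < 1024 or unit == units[-1]:
            return f"{n:.2f} {unit}"
        n /= 1024

format_readable_size(52428800)  # → '50.00 MiB'
```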
Troubleshooting
Connection Refused
Symptoms: “Connection refused” or timeout
Causes and Solutions:
- ClickHouse not running:
  ```shell
  # Check if ClickHouse is listening on port 8123
  curl http://localhost:8123/ping
  # Expected response: "Ok."
  ```
- Wrong port:
  - HTTP interface: 8123 (default)
  - HTTPS interface: 8443
  - Native TCP: 9000 (not used by TablePro)
- HTTP interface disabled in config: check `<http_port>` in /etc/clickhouse-server/config.xml
Authentication Failed
Symptoms: “Authentication failed” or HTTP 403
Solutions:
- Verify username and password
- Check the user exists in ClickHouse:
  ```sql
  SELECT name, auth_type FROM system.users;
  ```
- Verify the user has access to the target database:
  ```sql
  SHOW GRANTS FOR app_user;
  ```
Connection Timeout
Symptoms: Connection hangs or times out
Solutions:
- Verify host and port are correct
- Check network connectivity and firewall rules
- For cloud-hosted ClickHouse, ensure your IP is in the allowed list
Known Limitations
- No foreign keys or multi-statement transactions
- No auto-increment; primary key and sorting key are immutable after creation
- UPDATE/DELETE run as asynchronous background mutations via ALTER TABLE; check progress with `SELECT * FROM system.mutations WHERE is_done = 0`
- Designed for batch inserts, not single-row writes
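Given the batch-insert preference, a client should collect rows and send them in one request rather than one INSERT per row. A hedged sketch using the JSONEachRow format (table and column names are illustrative):

```python
import json

def batch_insert_body(table: str, rows: list) -> tuple:
    """Build one INSERT statement plus a JSONEachRow request body.

    Sending many rows per request suits ClickHouse far better than
    row-at-a-time INSERTs, which create one data part each.
    """
    query = f"INSERT INTO {table} FORMAT JSONEachRow"
    body = "\n".join(json.dumps(r) for r in rows)
    return query, body

query, body = batch_insert_body(
    "analytics.events",
    [{"user_id": 1, "event": "click"}, {"user_id": 2, "event": "view"}],
)
```

The `query` goes in the URL's `query` parameter and `body` becomes the HTTP POST payload.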