ClickHouse Connections

TablePro supports ClickHouse via its HTTP interface. ClickHouse is a column-oriented OLAP database built for real-time analytics on large datasets. TablePro connects over HTTP (port 8123 by default), not the native TCP protocol.

Quick Setup

Click New Connection, select ClickHouse, enter the host, port, credentials, and database, then click Create. The ClickHouse driver plugin installs automatically; you can also install it manually via Settings > Plugins > Browse > ClickHouse Driver.

Connection Settings

Field    | Default   | Notes
Host     | localhost |
Port     | 8123      | HTTP port (not 9000 native TCP)
Username | default   | Set a password in production
Database | default   |
TablePro uses the HTTP API (HTTPS on port 8443 when SSL/TLS is enabled). Cloud providers typically expose HTTP(S) only.

Example Configurations

  • Local: host localhost, port 8123, user default, empty password
  • Docker: run clickhouse/clickhouse-server:latest and set the CLICKHOUSE_USER and CLICKHOUSE_PASSWORD environment variables
  • ClickHouse Cloud: port 8443 with SSL/TLS enabled
  • Remote: use SSH tunneling for unencrypted HTTP
  • SSL/TLS: enable in the connection form for HTTPS (port 8443); Cloud requires HTTPS
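For the Docker case, a minimal docker-compose sketch could look like the following. The service name and password are illustrative assumptions, not TablePro requirements:

```yaml
# Hypothetical compose file for local testing; choose your own password.
services:
  clickhouse:
    image: clickhouse/clickhouse-server:latest
    ports:
      - "8123:8123"   # HTTP interface used by TablePro
    environment:
      CLICKHOUSE_USER: default
      CLICKHOUSE_PASSWORD: changeme   # assumption: replace before use
```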

Features

Database Browsing

After connecting, the sidebar lists all databases on the server. Expand a database to see its tables and views. Switch databases using the database switcher in the toolbar.

Table Browsing

For each table, TablePro shows:
  • Structure: Columns with ClickHouse data types, default expressions, and comments
  • Indexes: Data skipping indices (minmax, set, bloom_filter, etc.)
  • DDL: The full CREATE TABLE statement including engine and settings
  • Parts: Partition and part details from system.parts (rows, disk size, active status)
Engines:
Engine               | Use Case
MergeTree            | Primary engine for production analytics tables
ReplacingMergeTree   | Deduplication by sorting key
SummingMergeTree     | Pre-aggregation of numeric columns
AggregatingMergeTree | Incremental aggregation with AggregateFunction columns
Log / TinyLog        | Small tables, development, and testing
Memory               | In-memory tables for temporary data
Distributed          | Queries across shards in a cluster
Sorting/Partition Keys: Critical for performance. ClickHouse stores and reads data in sorting-key order, so choose keys that match your most common query filters.
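As a sketch, a typical MergeTree table might be defined like this (the analytics.events name and columns are assumptions borrowed from the query examples later on this page):

```sql
-- Hypothetical events table: partitioned by month, sorted by user and time
CREATE TABLE analytics.events
(
    event_time DateTime,
    user_id    UInt64,
    url        String
)
ENGINE = MergeTree
PARTITION BY toYYYYMM(event_time)
ORDER BY (user_id, event_time);
```

Queries filtering on user_id (the leading sorting-key column) can then skip most of the table.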

Data Skipping Indices

ClickHouse supports secondary indices that skip granules during queries. TablePro shows these in the Indexes tab:
-- Example: bloom filter index on a URL column
ALTER TABLE hits ADD INDEX url_idx url TYPE bloom_filter GRANULARITY 4;

Query Progress Tracking

During query execution, the toolbar shows live progress: rows read and bytes processed. After completion, a summary displays total execution time, rows read, and bytes read. This data is polled from system.processes.
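The same progress data is available directly in SQL; a query along these lines shows roughly what TablePro polls:

```sql
-- Currently executing queries with live progress counters
SELECT query_id, elapsed, read_rows, formatReadableSize(read_bytes) AS read
FROM system.processes;
```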

EXPLAIN Variants

ClickHouse supports multiple EXPLAIN modes. Click the Explain dropdown in the query editor to choose:
Variant  | Description
Plan     | Logical query plan (default)
Pipeline | Physical execution pipeline with thread/port info
AST      | Abstract syntax tree of the parsed query
Syntax   | Query after syntax optimizations
Estimate | Estimated rows, marks, and parts to read
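For instance, to inspect the estimated read volume before running a query (reusing the analytics.events table from the examples on this page):

```sql
-- Estimate how many parts, rows, and marks the query will touch
EXPLAIN ESTIMATE
SELECT count()
FROM analytics.events
WHERE event_time >= now() - INTERVAL 7 DAY;
```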

Server-side Query Cancellation

When you cancel a running query, TablePro sends a KILL QUERY command to the ClickHouse server. This stops the query on the server side, not just the HTTP connection.
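The equivalent server-side statements, should you ever need to cancel a query manually, look like this (the query_id value is a placeholder):

```sql
-- Find the running query's id, then kill it
SELECT query_id, query FROM system.processes;
KILL QUERY WHERE query_id = '<query_id>';
```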

Query Editor

Write and execute ClickHouse SQL queries in the editor:
-- Aggregation query
SELECT
    toDate(event_time) AS date,
    count() AS events,
    uniqExact(user_id) AS unique_users
FROM analytics.events
WHERE event_time >= now() - INTERVAL 7 DAY
GROUP BY date
ORDER BY date;

-- System tables
SELECT name, engine, total_rows, total_bytes
FROM system.tables
WHERE database = 'analytics';

-- Table partitions
SELECT partition, sum(rows) AS rows, formatReadableSize(sum(bytes_on_disk)) AS size
FROM system.parts
WHERE table = 'events' AND active
GROUP BY partition
ORDER BY partition;

Data Editing

TablePro supports editing cell values, inserting rows, and deleting rows in ClickHouse tables. Edits are submitted as standard INSERT, ALTER TABLE UPDATE, and ALTER TABLE DELETE statements.
ClickHouse mutations (UPDATE and DELETE) are asynchronous. They execute in the background and may take time to complete on large tables. See the Limitations section for details.
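As a sketch of what runs under the hood (table and column names assumed from the earlier examples), an edit and its progress check look like:

```sql
-- The ALTER returns immediately; rows change in the background
ALTER TABLE analytics.events UPDATE url = '' WHERE user_id = 0;

-- Check for mutations still in flight
SELECT mutation_id, command, is_done
FROM system.mutations
WHERE is_done = 0;
```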

Export and Import

Export query results or table data to CSV, JSON, and other formats. Import data from CSV files into ClickHouse tables.
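TablePro handles export through its UI, but ClickHouse also supports output formats directly in SQL; a server-side equivalent might look like:

```sql
-- Return results as CSV with a header row
SELECT name, engine
FROM system.tables
FORMAT CSVWithNames;
```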

System Databases

ClickHouse includes built-in system databases:
Database           | Description
system             | Server metrics, query logs, parts info, cluster state
information_schema | SQL-standard metadata views (tables, columns, schemata)
INFORMATION_SCHEMA | Alias for information_schema (case-sensitive variant)
The system database is particularly useful for monitoring:
-- Recent queries
SELECT query, read_rows, elapsed, memory_usage
FROM system.query_log
WHERE type = 'QueryFinish'
ORDER BY event_time DESC
LIMIT 20;

-- Disk usage by table
SELECT database, table, formatReadableSize(sum(bytes_on_disk)) AS size
FROM system.parts
WHERE active
GROUP BY database, table
ORDER BY sum(bytes_on_disk) DESC;

Troubleshooting

Connection Refused

Symptoms: “Connection refused” or timeout
Causes and Solutions:
  1. ClickHouse not running or HTTP interface disabled
    # Check if ClickHouse is listening on port 8123
    curl http://localhost:8123/ping
    # Expected response: "Ok."
    
  2. Wrong port
    • HTTP interface: 8123 (default)
    • HTTPS interface: 8443
    • Native TCP: 9000 (not used by TablePro)
  3. HTTP interface disabled in config
    • Check <http_port> in /etc/clickhouse-server/config.xml
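The relevant fragment of config.xml looks like this; the values shown are the defaults, and the root element name can vary by ClickHouse version:

```xml
<!-- /etc/clickhouse-server/config.xml -->
<clickhouse>
    <http_port>8123</http_port>
    <!-- Uncomment and configure certificates to serve HTTPS on 8443 -->
    <!-- <https_port>8443</https_port> -->
</clickhouse>
```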

Authentication Failed

Symptoms: “Authentication failed” or HTTP 403
Solutions:
  1. Verify username and password
  2. Check user exists in ClickHouse:
    SELECT name, auth_type FROM system.users;
    
  3. Verify the user has access to the target database:
    SHOW GRANTS FOR app_user;
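If the grants are missing, a minimal fix might look like the following (the app_user name comes from the example above; the analytics database is an assumption):

```sql
-- Grant read access on the target database
GRANT SELECT ON analytics.* TO app_user;
```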
    

Connection Timeout

Symptoms: Connection hangs or times out
Solutions:
  1. Verify host and port are correct
  2. Check network connectivity and firewall rules
  3. For cloud-hosted ClickHouse, ensure your IP is in the allowed list

Known Limitations

  • No foreign keys or multi-statement transactions
  • No auto-increment; primary key and sorting key are immutable after creation
  • UPDATE/DELETE run as asynchronous background mutations via ALTER TABLE — check progress with SELECT * FROM system.mutations WHERE is_done = 0
  • Designed for batch inserts, not single-row writes
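To illustrate the batch-insert point: prefer one INSERT carrying many rows over many single-row INSERTs (names assumed from the earlier examples):

```sql
-- One batched INSERT creates one data part;
-- three separate single-row INSERTs would create three
INSERT INTO analytics.events (event_time, user_id, url) VALUES
    (now(), 1, '/home'),
    (now(), 2, '/pricing'),
    (now(), 3, '/docs');
```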