Documentation Index
Fetch the complete documentation index at: https://docs.tablepro.app/llms.txt
Use this file to discover all available pages before exploring further.
Google BigQuery Connections
TablePro connects to BigQuery via its REST API. Browse datasets and tables, run GoogleSQL queries, and edit rows in the data grid. The plugin installs automatically, or grab it from Settings > Plugins > Browse.
Quick Setup
Click New Connection, select BigQuery, pick an auth method, enter your Project ID, and connect.
Authentication
Service Account Key: Point to a .json key file from Google Cloud Console (IAM > Service Accounts > Keys), or paste the raw JSON content directly into the field.
Application Default Credentials: Uses cached credentials from the gcloud CLI. Supports authorized_user, service_account, and impersonated_service_account credential types. Run gcloud auth application-default login first.
Google Account (OAuth): Authorize TablePro with your own Google account:
- Go to Google Cloud Console > APIs & Services > Credentials
- Click “Create Credentials” > “OAuth client ID”
- Select “Desktop app” as application type
- Copy the Client ID and Client Secret into TablePro
- On first connect, your browser opens for Google authorization
- After approving, TablePro receives the token automatically
Connection Settings
| Field | Required | Notes |
|---|---|---|
| Auth Method | Yes | Service Account Key, ADC, or Google Account (OAuth) |
| Project ID | Yes | e.g. my-project-123456 |
| Service Account Key | SA only | Path to .json key file, or raw JSON content |
| OAuth Client ID | OAuth only | From GCP Console > APIs & Services > Credentials |
| OAuth Client Secret | OAuth only | From GCP Console > APIs & Services > Credentials |
| Location | No | Processing location: US, EU, us-central1, etc. |
| Max Bytes Billed | No | Cost cap per query in bytes (Advanced tab). Queries exceeding this limit will fail instead of billing. |
Features
Dataset Browsing: The first dataset is auto-selected on connect. Switch datasets with ⌘K or the database switcher. Datasets show as schemas in the sidebar.
Table Structure: Columns with full BigQuery types: STRUCT<name STRING, age INT64>, ARRAY<STRING>, nullable status, field descriptions. Clustering and partitioning info in the Indexes tab.
GoogleSQL Queries (docs): Run GoogleSQL queries; after execution the plugin reports bytes processed, bytes billed, and estimated cost (e.g. Processed: 1.5 MB | Billed: 10 MB | ~$0.0001). Use the Dry Run option (Explain dropdown > “Dry Run (Cost)”) to check cost before executing.
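As an illustration, a query like the following (project, dataset, and column names here are hypothetical) can be dry-run first to see how many bytes it would scan before you commit to running it:

```sql
-- Check this with Dry Run (Cost) before executing
SELECT order_id, status
FROM `my-project-123456.my_dataset.orders`
WHERE order_date >= '2024-01-01'
LIMIT 100;
```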
Data Types: INT64, FLOAT64, NUMERIC, BIGNUMERIC, BOOL, STRING, BYTES, DATE, TIME, DATETIME, TIMESTAMP, GEOGRAPHY, JSON, STRUCT, ARRAY, RANGE. Complex types (STRUCT/ARRAY) display as JSON.
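For example, a query like this (values are illustrative) returns one STRUCT column and one ARRAY column, which the data grid renders as JSON:

```sql
SELECT
  STRUCT('Ada' AS name, 36 AS age) AS person,  -- rendered as JSON: {"name":"Ada","age":36}
  ['red', 'green', 'blue'] AS colors;          -- rendered as JSON: ["red","green","blue"]
```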
Data Editing: Edit cells, insert/delete rows. DML example:
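A minimal sketch of the kind of DML statement involved; the table, column, and key names here are hypothetical:

```sql
-- Update a single cell, keyed by the row's identifier
UPDATE `my-project-123456.my_dataset.orders`
SET status = 'shipped'
WHERE order_id = 42;
```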
Schema Operations: Create datasets (CREATE SCHEMA), add/drop columns (ALTER TABLE), create views (CREATE OR REPLACE VIEW). Table DDL viewable via INFORMATION_SCHEMA.
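For instance, a table's DDL can be read back from INFORMATION_SCHEMA with a query like this (dataset and table names are hypothetical):

```sql
SELECT ddl
FROM `my-project-123456.my_dataset.INFORMATION_SCHEMA.TABLES`
WHERE table_name = 'orders';
```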
Export: CSV, JSON, SQL, XLSX formats.
IAM Permissions
Minimum roles:
- roles/bigquery.user — run queries
- roles/bigquery.dataViewer — browse and read data
- roles/bigquery.dataEditor — INSERT, UPDATE, DELETE
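BigQuery also lets you grant these roles with GoogleSQL DCL; a sketch, where the dataset and principal are hypothetical:

```sql
-- Grant read access on one dataset to a specific user
GRANT `roles/bigquery.dataViewer`
ON SCHEMA `my-project-123456.my_dataset`
TO "user:analyst@example.com";
```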
Troubleshooting
Auth failed: Check key file path or run gcloud auth application-default login.
Permission denied: Ensure bigquery.user role is granted on the project.
Project not found: Use the Project ID (not name or number).
Query timeout: BigQuery runs jobs async. The plugin polls up to 5 minutes (configurable).
Cost: Use LIMIT, select specific columns, prefer partitioned tables. Set Max Bytes Billed to prevent expensive queries. Table browsing caps rows automatically.
No tables after connect: Switch datasets with ⌘K. The first dataset is auto-selected, but if it’s empty, switch to one with tables.
Limitations
- No SSH tunneling (HTTPS only to BigQuery API)
- No transactions
- STRUCT/ARRAY columns excluded from UPDATE/DELETE WHERE clauses
- Deep pagination (large OFFSET) scans from start. Use filters to narrow results
- No streaming inserts
- OAuth tokens are session-only (re-authorize after disconnect)
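The deep-pagination limitation above can be sidestepped with keyset pagination: filter on the last key seen instead of using OFFSET. A sketch with hypothetical table and column names:

```sql
-- Resume from the last order_id of the previous page rather than OFFSET
SELECT order_id, status
FROM `my-project-123456.my_dataset.orders`
WHERE order_id > 100000
ORDER BY order_id
LIMIT 1000;
```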
