Creating a ClickHouse source
Learn how to create a ClickHouse source to query logs from ClickHouse tables.
Prerequisites
You need an existing ClickHouse connection with the connection_use permission. See Creating a ClickHouse connection for setup instructions.
Step 0: Open form
Navigate to Sources → +Create.
Step 1: Connection
Choose an existing ClickHouse connection from the dropdown and configure:
- Database – Database name in ClickHouse
- Table – Table name containing your logs
- Query settings – Optional ClickHouse SETTINGS clause appended to all queries
  - Comma-separated key=value pairs (e.g., max_threads=4, max_memory_usage=10000000000)
  - Applied to all queries executed on this source
  - Useful for performance tuning or enforcing query limits
  - See the ClickHouse SETTINGS documentation for available options
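As a minimal sketch (not the product's actual parser, and the function name is hypothetical), comma-separated key=value pairs can be assembled into a SETTINGS clause like this:

```python
def build_settings_clause(raw: str) -> str:
    """Turn "max_threads=4, max_memory_usage=10000000000" into a
    ClickHouse SETTINGS clause. Illustrative only."""
    pairs = []
    for part in raw.split(","):
        part = part.strip()
        if not part:
            continue  # skip empty fragments such as trailing commas
        key, _, value = part.partition("=")
        pairs.append(f"{key.strip()} = {value.strip()}")
    return "SETTINGS " + ", ".join(pairs) if pairs else ""

clause = build_settings_clause("max_threads=4, max_memory_usage=10000000000")
# clause == "SETTINGS max_threads = 4, max_memory_usage = 10000000000"
```

The resulting clause would be appended to every query executed on this source.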
Step 2: Columns
Add and configure columns for your source.
Recommended approach:
Click “Autoload columns” to automatically load column definitions from the ClickHouse schema. This ensures column types are correct and saves manual configuration.
Manual configuration:
Click “Add Column” to add columns one by one if autoloading is unavailable or you want custom configuration.
Column properties:
Each column has several properties:
- Name – Column name in the database table
- Display name – Column name shown in the data explorer
- Type – Column type based on ClickHouse data type (used to determine eligible time columns)
- Treat as JSON string – Whether to parse this column as JSON in results
- Autocomplete – Enable autocompletion for this column in query input
- Suggest – Suggest this column in query input
- Values – Comma-separated list of predefined values (for the enum type only)
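Purely as an illustration, a column configured with the properties above might be represented like this (the class and field names are hypothetical, not the product's data model):

```python
from dataclasses import dataclass, field

@dataclass
class SourceColumn:
    # Hypothetical model mirroring the column properties listed above.
    name: str                      # column name in the database table
    display_name: str              # name shown in the data explorer
    type: str                      # type derived from the ClickHouse data type
    treat_as_json: bool = False    # parse this column as JSON in results
    autocomplete: bool = False     # enable autocompletion in query input
    suggest: bool = False          # suggest this column in query input
    values: list = field(default_factory=list)  # predefined values (enum type only)

severity = SourceColumn(
    name="level", display_name="Severity", type="enum",
    suggest=True, values=["debug", "info", "warn", "error"],
)
```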
Step 3: Settings
Configure which columns have special roles:
- Time column – Used for filtering logs by time range (required)
  - Should match your ClickHouse table’s partition key for optimal performance
  - Eligible column types: datetime, timestamp, UInt64, Int64
- Date column – Additional date column if your schema separates date and time (optional)
- Severity column – Used for colored log bars and default graph grouping (optional)
  - Configure this if your logs have a dedicated severity/level column
- Default chosen columns – Columns shown by default in the results table (required)
- Execute query on open – Controls whether queries run automatically when opening the explorer, or whether the user must press the “Execute” button explicitly
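To illustrate why the time column should match the partition key, here is a hedged Python sketch of the kind of time-range filter a log explorer applies; the function and query shape are assumptions, not the product's actual queries. When the filtered column aligns with the partition key, ClickHouse can prune whole partitions instead of scanning them:

```python
from datetime import datetime

def time_filtered_query(table: str, time_column: str,
                        start: datetime, end: datetime) -> str:
    """Sketch of a time-range filter on the configured time column.
    Hypothetical query shape, for illustration only."""
    fmt = "%Y-%m-%d %H:%M:%S"
    return (
        f"SELECT * FROM {table} "
        f"WHERE {time_column} >= '{start.strftime(fmt)}' "
        f"AND {time_column} < '{end.strftime(fmt)}'"
    )

q = time_filtered_query("logs.app", "timestamp",
                        datetime(2024, 1, 1), datetime(2024, 1, 2))
```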
Step 4: Naming
Specify source identification:
- Slug – Unique identifier (cannot be changed after creation)
  - Used as a human-readable source identifier in URLs
  - Must be a valid Django slug
- Name – Human-readable source name (e.g., “Production App Logs”, “Staging API Logs”)
- Description – Optional description explaining what data this source provides
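Since the slug must be a valid Django slug, it may only contain letters, digits, underscores, and hyphens. A quick way to pre-check a candidate slug (this helper is illustrative, not part of the product):

```python
import re

# Django's slug pattern: ASCII letters, digits, underscores, and hyphens only
SLUG_RE = re.compile(r"^[-a-zA-Z0-9_]+\Z")

def is_valid_slug(value: str) -> bool:
    return bool(SLUG_RE.match(value))

is_valid_slug("production-app-logs")  # True
is_valid_slug("Production App Logs")  # False (spaces are not allowed)
```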
Step 5: Review & Create
Review your configuration and click “Create” to save the source.
Best practices
- Autoload columns when possible to ensure accuracy
- Use a time column that matches your ClickHouse partition key for optimal query performance
- Only expose columns needed for log analysis to keep the interface clean
- Configure severity column if your logs have severity/level information
- Use descriptive names indicating data type and environment (e.g., “Production App Logs”)
Related documentation
- Source concepts – Understanding sources and their role
- ClickHouse source details – Technical details
- Connection configuration – Setting up connections