Monitoring usage and system resources

Monitor system usage, dataset growth, user activity, and refresh performance using Postgres-based SQL queries. Build dashboards to track adoption and resource trends.

This guide helps Tellius admins track system usage, monitor dataset growth, audit user activity, and optimize platform performance. By running the provided SQL queries via a secure Postgres data source, you can create reusable datasets and dashboards that offer visibility into how Tellius is being used across your organization.

These queries require access to the system metadata database. Contact your Tellius admin or support team to set up a secure, validated Postgres connection.

Connecting to the metadata database

To run the queries in this guide, you’ll need to create a Postgres data source that connects to your Tellius instance’s internal metadata store.

  1. Under Data → Connect, choose Postgres as the connection type.

  2. Provide the host, port, database name, username, and password (your Tellius admin or support team will supply these).

  3. Save and test the connection.

  4. Once connected, you can use the Custom SQL option in the dataset creation flow to execute the queries below.
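As a quick sanity check, you can run a query through Custom SQL that relies only on built-in Postgres functions, for example:

```sql
-- Confirms the connection works and shows which database and user
-- the Custom SQL editor is actually using (built-in Postgres functions only).
SELECT current_database() AS connected_database,
       current_user       AS connected_as,
       version()          AS postgres_version;
```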

Setting up the metadata connection

To run system-level queries (such as usage tracking, dataset statistics, and user activity), you'll need to connect to Tellius's internal metadata databases using a Postgres data source.

Databases you may connect to:

  • middleware_prod – Stores core platform metadata such as users, datasets, Business Views, and configurations.

  • middleware_tracking_prod – (Optional) May contain event tracking or usage-related metadata depending on deployment.

  • usage_tracker – Logs detailed activity, such as search queries, refresh events, Vizpad usage, and user logins.

Metadata schema overview

Once connected, you can query internal tables that define user access, roles, and group mappings in your Tellius environment.

| Table | Description |
| --- | --- |
| users | Stores all registered users and their metadata (e.g., name, email, roles) |
| groups | Represents logical user groups used for access control |
| users_groups | Maps which users belong to which groups |
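The join-key columns vary by deployment, so the query below is only a minimal sketch of how these three tables typically relate; adjust the assumed id, user_id, and group_id columns to match the actual schema:

```sql
-- Sketch: list every user together with the groups they belong to.
-- The id, user_id, and group_id join columns are assumptions.
SELECT u.name  AS user_name,
       u.email AS user_email,
       g.name  AS group_name
FROM users u
JOIN users_groups ug ON ug.user_id = u.id
JOIN "groups" g      ON g.id = ug.group_id   -- quoted to avoid the GROUPS keyword
ORDER BY u.name, g.name;
```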

Monitoring datasets

This query lists each dataset in the system with key metadata such as name, owner, number of rows and columns, and size (in kilobytes).

Use this to identify:

  • Large or outdated datasets that may need to be archived or refreshed

  • Ownership and accountability of data assets

  • Data growth trends over time
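The exact metadata layout depends on your deployment, but a dataset inventory query generally takes the shape sketched below; the datasets table and its columns are illustrative assumptions:

```sql
-- Sketch: dataset inventory ordered by size.
-- Table and column names (datasets, owner, row_count, column_count,
-- size_kb, created_at) are assumptions to adapt to the real schema.
SELECT d.name                       AS dataset_name,
       d.owner                      AS owner,
       d.row_count                  AS row_count,
       d.column_count               AS column_count,
       ROUND(d.size_kb / 1024.0, 2) AS size_mb,
       d.created_at                 AS created_at
FROM datasets d
ORDER BY d.size_kb DESC;
```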

Dataset size estimation in GB

This more advanced query estimates total dataset size by calculating per-row memory cost based on data types, then multiplying by row count.
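A sketch of that calculation is shown below. The table names, column names, and per-type byte weights are assumptions used to illustrate the approach, not the exact values Tellius uses:

```sql
-- Sketch: estimate dataset size in GB from per-column byte weights.
-- dataset_columns, the data_type values, and the byte weights are assumptions.
SELECT d.name AS dataset_name,
       ROUND(
         d.row_count * SUM(
           CASE
             WHEN c.data_type IN ('int', 'integer')    THEN 4
             WHEN c.data_type IN ('bigint', 'double')  THEN 8
             WHEN c.data_type IN ('date', 'timestamp') THEN 8
             ELSE 50  -- rough average width for string columns
           END
         ) / (1024.0 * 1024 * 1024), 2
       ) AS estimated_size_gb
FROM datasets d
JOIN dataset_columns c ON c.dataset_id = d.id
GROUP BY d.name, d.row_count
ORDER BY estimated_size_gb DESC;
```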

User activity monitoring

Lists Tellius users along with their email addresses and names. Use this to track platform adoption, audit access, or manage licenses.

Monitor usage of natural language search by tracking what queries users submit and when. Combine with user logs to understand adoption of search and identify training opportunities.
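A minimal sketch of a search-activity query, assuming a hypothetical search_queries table (in usage_tracker) with user_id, query_text, and created_at columns; if users and search events live in separate databases, query them separately and join the exports:

```sql
-- Sketch: natural language search activity per user over the last 30 days.
-- search_queries and its columns are assumed names.
SELECT u.email,
       u.name,
       COUNT(*)           AS searches_last_30_days,
       MAX(sq.created_at) AS last_search_at
FROM search_queries sq
JOIN users u ON u.id = sq.user_id
WHERE sq.created_at >= NOW() - INTERVAL '30 days'
GROUP BY u.email, u.name
ORDER BY searches_last_30_days DESC;
```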

Daily usage analytics

This query aggregates daily activity across various Tellius features such as Search, Insights, data operations, Vizpads, and user logins. Visualize this in a line chart to observe daily or weekly usage patterns.
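A sketch of such an aggregation, assuming a hypothetical usage_events table with event_type and created_at columns:

```sql
-- Sketch: daily event counts per feature for the last 90 days.
SELECT date_trunc('day', created_at)::date AS activity_date,
       event_type,                          -- e.g. search, insight, vizpad, login
       COUNT(*)                             AS event_count
FROM usage_events
WHERE created_at >= NOW() - INTERVAL '90 days'
GROUP BY 1, event_type
ORDER BY activity_date, event_type;
```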

Kaiya conversational AI usage

Track how often users engage with Kaiya, Tellius's conversational AI, to measure adoption over time.
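A minimal sketch, assuming a hypothetical kaiya_conversations table that records one row per interaction with user_id and created_at columns:

```sql
-- Sketch: weekly Kaiya interactions per user.
SELECT date_trunc('week', created_at)::date AS week_start,
       user_id,
       COUNT(*)                             AS kaiya_interactions
FROM kaiya_conversations
GROUP BY 1, user_id
ORDER BY week_start DESC, kaiya_interactions DESC;
```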

Monitoring Business Views

Monitor how large each Business View is to optimize performance.

Understand which Business Views are shared with which groups. Use this to audit access control and sharing practices.
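The sketch below combines both checks, assuming hypothetical business_views and business_view_shares tables; adjust the names and join columns to the actual metadata schema:

```sql
-- Sketch: Business View size in GB plus the groups each view is shared with.
SELECT bv.name                                AS business_view_name,
       ROUND(bv.size_kb / (1024.0 * 1024), 2) AS size_gb,
       g.name                                 AS shared_with_group
FROM business_views bv
LEFT JOIN business_view_shares s ON s.business_view_id = bv.id
LEFT JOIN "groups" g             ON g.id = s.group_id
ORDER BY size_gb DESC, business_view_name;
```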

Business View ↔ Vizpad mapping

This query maps Vizpads to the Business Views they rely on, along with how many charts, KPIs, and other components each one contains. Use it to assess impact before modifying or deleting a Business View, and for:

  • Design optimization (e.g., identifying Vizpads using large or outdated views)

  • Auditing dependencies for dashboard maintenance
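A sketch of such a mapping, with the vizpads, vizpad_charts, and join-column names as assumptions:

```sql
-- Sketch: which Vizpads depend on which Business Views, with a chart count.
SELECT bv.name     AS business_view_name,
       v.name      AS vizpad_name,
       COUNT(c.id) AS chart_count
FROM vizpads v
JOIN business_views bv    ON bv.id = v.business_view_id
LEFT JOIN vizpad_charts c ON c.vizpad_id = v.id
GROUP BY bv.name, v.name
ORDER BY bv.name, chart_count DESC;
```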

To merge Business View size and sharing details for offline audit or reporting, use the provided Python script to join the exported .csv files and calculate sizes in GB. This is useful for access control reviews or for cleaning up unused assets.

Snowflake data source metadata

Track how Snowflake is used in your environment, including:

  • How many datasets are built on each connection

  • Who owns the connections

  • Which auth methods are in use
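A sketch of a connection summary, assuming hypothetical data_sources and datasets tables with the columns shown:

```sql
-- Sketch: datasets per Snowflake connection, with owner and auth method.
SELECT ds.name      AS connection_name,
       ds.owner     AS connection_owner,
       ds.auth_type AS auth_method,
       COUNT(d.id)  AS dataset_count
FROM data_sources ds
LEFT JOIN datasets d ON d.data_source_id = ds.id
WHERE ds.source_type ILIKE 'snowflake'
GROUP BY ds.name, ds.owner, ds.auth_type
ORDER BY dataset_count DESC;
```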

Scheduled refresh monitoring

Monitor refresh jobs across datasets, models, Vizpads, and Insights.
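A sketch of a refresh-history query, assuming a hypothetical refresh_jobs table with object, status, and timestamp columns:

```sql
-- Sketch: refresh jobs from the last 7 days with status and duration.
SELECT object_type,               -- dataset, model, Vizpad, or Insight
       object_name,
       status,                    -- e.g. success, failed, running
       started_at,
       finished_at,
       finished_at - started_at AS duration
FROM refresh_jobs
WHERE started_at >= NOW() - INTERVAL '7 days'
ORDER BY started_at DESC;
```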

Estimating cluster resource handling capacity

The amount of data a Tellius cluster can handle depends on two factors:

  1. Whether cluster autoscaling is enabled or disabled

  2. The number of allocated CPU cores in the cluster

The following formulas help estimate how much data your cluster can process and store efficiently.

  • numberOfCores: Total number of CPU cores in your Tellius cluster.

  • 3.125: The baseline GB-per-core multiplier used in the formulas below.

🚫 Cluster autoscaling disabled

When autoscaling is turned off, the system cannot automatically add or remove resources based on workload. You must size the cluster manually to meet expected demand.

Ideal data capacity:

Ideal capacity (GB) = 3.125 × 1.20 × numberOfCores

The 1.20 factor accounts for additional memory overhead used by background operations and system processes.

For maximum supported data with degradation:

Max capacity (GB) = 3.125 × 1.20 × 1.25 × numberOfCores

The 1.25 multiplier allows up to 25% more data to be handled, though performance (e.g., response time or refresh speed) may slightly degrade.

For a cluster with 64 cores:

  • Ideal capacity = 3.125 × 1.20 × 64 = 240 GB

  • Max capacity (with some degradation) = 3.125 × 1.20 × 1.25 × 64 = 300 GB

✅ Cluster autoscaling enabled

When autoscaling is enabled, the cluster can dynamically adjust resources based on demand. In this setup, system overhead is already managed, so the formula is simpler.

Ideal data capacity:

Ideal capacity (GB) = 3.125 × numberOfCores

For maximum supported data with degradation:

Max capacity (GB) = 3.125 × 1.25 × numberOfCores

For a cluster with 64 cores, this works out to 3.125 × 64 = 200 GB ideal and 3.125 × 1.25 × 64 = 250 GB maximum.

These values are estimates, not strict limits. Actual performance may vary based on:

  • Data format (CSV vs. Parquet)

  • Number of columns

  • Query complexity

  • Number of concurrent users

Cluster sizing reference table

Here’s a simple comparison to help you estimate how much data your cluster can handle, depending on whether autoscaling is enabled or not:

| # Cores | Autoscaling | Ideal capacity | Max capacity (with 25% buffer) |
| --- | --- | --- | --- |
| 32 | Enabled | 100 GB | 125 GB |
| 32 | Disabled | 120 GB | 150 GB |
| 64 | Enabled | 200 GB | 250 GB |
| 64 | Disabled | 240 GB | 300 GB |
| 128 | Enabled | 400 GB | 500 GB |
| 128 | Disabled | 480 GB | 600 GB |

Calculated columns

This query shows all custom formulas used in Business Views.
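A sketch, assuming a hypothetical calculated_columns table that stores each formula alongside its Business View:

```sql
-- Sketch: custom formulas (calculated columns) grouped by Business View.
SELECT bv.name AS business_view_name,
       cc.column_name,
       cc.formula
FROM calculated_columns cc
JOIN business_views bv ON bv.id = cc.business_view_id
ORDER BY bv.name, cc.column_name;
```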
