Managing your datasets

Easily manage your datasets


The Datasets page under Data → Dataset provides a unified overview of all your datasets—highlighting each dataset’s type, creation date, last refresh time, underlying data source, refresh schedule, and owner. This structured view makes it easy to:

  • Track data freshness and relevance.

  • Identify who is responsible for a dataset.

  • Organize and maintain datasets effectively.

Dataset Listing Columns

  1. Type: The data source or file type, such as CSV or Salesforce. For instance, a CSV icon means the dataset was uploaded from a flat file, whereas a Salesforce icon indicates a direct connection to your Salesforce instance.

  2. Dataset name: The user-defined name for the dataset. An exclamation point next to the dataset name indicates that no Business View has been created for the dataset yet; clicking it redirects you to Data → Business Views.

  3. Date created: The timestamp (date and time) when the dataset was originally created in Tellius.

  4. Last refreshed: The timestamp of the most recent data load or refresh event. Indicates how current the dataset is relative to the source system.

  5. Datasource: The name or identifier of the underlying data source. Links each dataset to its original data source or connector. For flat files, this may appear as “-” if no direct source is referenced or if the file is static.

  6. Schedule: The dataset’s refresh frequency, whether daily, weekly, or a custom schedule. Displays “-” if no refresh schedule has been set.

  7. Owner: The user who created the dataset or has primary control over it (e.g., “superUser”).

Use the sorting and searching options together for efficient navigation in environments with dozens or hundreds of datasets.

  • Search bar: Allows you to type a keyword (e.g., part of a dataset name) to quickly filter the displayed datasets.

  • Filter dropdown: Allows you to filter the visible datasets. You can choose to view all datasets, datasets created by you, or datasets shared with you.

  • Sort menu: Enables you to sort datasets by title, type, or recently created.

To the far right of each dataset row is a three-dot (kebab) menu.

Three-dot menu options

When you click on the three-dot icon at the far right of any dataset row, a menu appears with various dataset management actions. Here’s what each option does:

  1. Rename: Allows you to change the dataset’s display name.

  2. Make a copy: Duplicates the selected dataset. The following window will be displayed.

  • Cache dataset in memory? If enabled, the new copy of the dataset is stored (cached) in memory for faster querying and analysis, which improves performance when you query it repeatedly but uses additional memory resources. The copy is a separate dataset; the original remains unchanged.

  • Count (No. of records) / Percent (0 to 100%): Lets you specify how much of the original dataset to include in the copy—either by a fixed number of rows (Count) or by a percentage of rows.

You can create a smaller, representative sample dataset (e.g., “10,000 records” or “20% of the original dataset”) for testing, quick prototyping, or machine learning workflows that don’t require the full dataset.

  • Dataset name: A user-defined, unique name for the new copied dataset.

  • Click on Submit to create the new dataset copy or click Cancel to discard.

  3. Share: Allows you to share the selected dataset. Provide the relevant username or email ID in the following window and select the permissions.

  4. Add to Business View: Redirects you to Data → Business Views, where you can apply the required changes and create a Business View from the dataset (a curated semantic layer that defines how columns are grouped, named, and formatted).

  5. Edit SQL Load:

After you have created and saved a dataset, you may need to modify the underlying SQL query or adjust how the data is partitioned. This option is ideal for quick adjustments, such as updating filters, joins, or calculations, without recreating the entire dataset, and is useful for advanced transformations or adapting to new data structures.

To do so, click the three-dot menu of the required SQL-based dataset under Data → Datasets and select Edit SQL Load from the menu. The following window will be displayed.

  • Inside the dialog, you will see an interface with two toggles at the top: Query and DBTable.

  • Choose Query if you want to enter or modify a custom SQL query directly. If your data retrieval logic involves multiple joins, in-line calculations, or advanced filters that are easier to express in SQL, the Query option is more appropriate. The Query field is where you will paste or write your updated SQL code.

  • You would choose DBTable if you simply want to select one (or more) tables directly from your database without writing or maintaining a custom SQL query. This approach is often easier and more straightforward when you don’t need complex joins, filters, or transformations—Tellius will handle the basic data retrieval automatically.

  • Update the SQL text in the Query section as needed. For example, you might add a filter WHERE clause, join another table, or select additional columns (see the sample query after this list).

  • After making changes, click Run Validation to ensure that the updated SQL is syntactically correct and returns data.

  • Below the Query section, you’ll find the Partitioning (optional) toggle. Partitioning divides the dataset into smaller chunks based on a chosen numeric column. This can vastly improve performance and reduce load times on large datasets.
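
As an illustration only, here is a minimal sketch of the kind of edit you might make in the Query field; the table and column names (orders, customers, order_date, and so on) are hypothetical and not part of Tellius:

    -- Original query (hypothetical): load every order
    --   SELECT * FROM orders;

    -- Updated query: add a WHERE filter, join another table,
    -- and select additional columns
    SELECT o.order_id,
           o.order_date,
           o.amount,
           c.customer_name,
           c.region
    FROM   orders o
    JOIN   customers c
           ON c.customer_id = o.customer_id
    WHERE  o.order_date >= '2024-01-01';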

The “Edit SQL load” option in Tellius is primarily available for datasets that were originally created or ingested via a custom SQL query against a SQL-based data source (e.g., MySQL, MSSQL, PostgreSQL).

For CSV or Excel uploads, certain connectors (e.g., Salesforce), or data-lake integrations, there is no SQL layer involved, so Edit SQL Load does not apply.

  6. Archive: Moves the dataset into an archived state, indicating it’s no longer actively used. This frees up the main workspace and helps keep active lists organized while still retaining access to older datasets if needed in the future.

  • The following window appears when you choose to archive a particular dataset, providing a final confirmation step.

  • A table lists the dataset and any related objects (e.g., Business Views, Vizpads, etc.) that will no longer be accessible once the dataset is archived. Along with each object’s name and type, it also shows when the object was last modified and who created it.

  • To guard against accidental archiving, you must type the phrase "Archive this dataset" to confirm.

  • Before archiving, you can also Download the list of impacted objects as a CSV file.

  • Archived datasets appear toward the end of the Datasets list in the left pane under Data → Prepare.

The data of archived datasets cannot be viewed. You can either unarchive or delete the archived datasets.

  7. Export Dataset: Exports the dataset in .zip format.

  8. Delete: Permanently removes the dataset and its downstream content. The following window will be displayed:

  • The window clarifies that all downstream content derived from the dataset (datasets, Projects, Vizpads, Insights, Models, and Business Views) will also be removed, helping prevent accidental deletions.

  • You are prompted to type the confirmation sentence “I understand that I will lose all content” exactly, acting as a final safeguard: you must explicitly confirm that you’re okay with losing all connected content.

  • Click Delete to permanently remove the dataset and all listed objects (Business Views, Insights, etc.), or click Cancel to discard. This action is irreversible.

Create New: Initiates the process of creating a new dataset. Clicking it loads the “New Datasource” selection screen, where you can pick from various connectors. For more details, check out this page.

To enable partitioning in the Edit SQL Load dialog, toggle the Partitioning switch. For more details on partitioning and its fields, please check out this page.
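
For intuition only, partitioning on a numeric column typically splits the load into range-bounded queries that can run in parallel; the column name and ranges below are hypothetical, and the exact fields shown in the Partitioning section may differ:

    -- Partition 1
    SELECT * FROM orders WHERE order_id >= 1      AND order_id < 250000;
    -- Partition 2
    SELECT * FROM orders WHERE order_id >= 250000 AND order_id < 500000;
    -- Partition 3
    SELECT * FROM orders WHERE order_id >= 500000 AND order_id < 750000;
    -- Partition 4
    SELECT * FROM orders WHERE order_id >= 750000 AND order_id <= 1000000;

Because each chunk is smaller and loads independently, partitioning tends to help most on large tables.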

Swap Datasource: Replaces the current data source with another compatible source for seamless migration between data environments (e.g., switching from a development DB to a production DB). For more details, check out this page.
