Connecting to Databricks Delta Lake
Seamlessly browse, ingest, and analyze Delta Lake tables within Tellius
Databricks Delta Lake is a storage layer that brings ACID transactions to Apache Spark and big data workloads, commonly hosted on Databricks. By connecting Tellius to your Databricks cluster, you can securely load data from Delta tables into Tellius for analytics, AI-driven insights, and more.
URL: The connection string or endpoint pointing to your Databricks workspace (e.g., jdbc:spark://<workspace-url>:443/;transportMode=http;ssl=1;httpPath=sql/protocolv1/...).
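As an illustration, the connection string above can be assembled from its pieces. This is a sketch only: the hostname and httpPath used below are made-up placeholders, and you must substitute the real values from your own Databricks workspace.

```python
def build_databricks_jdbc_url(workspace_host: str, http_path: str, port: int = 443) -> str:
    """Assemble a JDBC/Spark connection string in the format shown above.

    workspace_host and http_path are placeholders; copy the actual values
    from your Databricks workspace's connection details.
    """
    return (
        f"jdbc:spark://{workspace_host}:{port}/"
        f";transportMode=http;ssl=1;httpPath={http_path}"
    )

# Hypothetical workspace values, for illustration only:
url = build_databricks_jdbc_url("adb-1234.azuredatabricks.net", "sql/protocolv1/o/0/cluster-id")
print(url)
```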
Datasource Name: A user-friendly name for this connection in Tellius.
Save and Browse Host: Saves the connection details and attempts to browse the host for available schemas and tables. If the connection is successful, you can select tables or write custom queries for loading Delta data into Tellius.
If you've previously validated and saved a Delta Lake connection, you can reuse its details:
Use validated datasource connection details: When enabled, it reveals a dropdown to choose from existing, previously configured Delta Lake connections.
Select datasource: Lists all pre-validated Delta Lake connections. Select the one you want to reuse; all fields are filled in automatically with the saved configuration.
Browse Host: Similar to "Save and Browse Host", but navigates forward using the chosen existing connection's parameters.
After establishing a connection, you will see options to load data from Delta Lake tables.
Select a table: Displays all available tables under the chosen Delta Lake service/schema. Pick the tables you need for analysis. If there are many tables, use the search field to narrow them down.
Search for table: Filters the displayed tables based on your search term.
Import: Imports the selected table(s) into Tellius.
If you prefer more granular control or want to write your own SQL queries to load precisely the data you need, switch to the "Custom SQL" tab.
Table filter: Helps locate a particular table by name before writing your SQL.
Select a table: Choose a table name to use in your custom query.
Query: A field for your custom SQL statement (e.g., SELECT * FROM SYS.WRI$_DBU_CPU_USAGE).
Preview: Executes the SQL query and displays a few sample rows of the data you're about to import in the "Dataset Preview" area. This lets you validate that the query returns the correct data before fully importing it, and helps catch syntax errors or incorrect filters early.
Import: Once satisfied with the preview, click Import to load the data returned by the SQL query into Tellius.
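Tellius runs the preview for you, but conceptually a preview amounts to executing the query with a row cap. The LIMIT-wrapping approach below is an assumption for illustration, not Tellius's actual implementation:

```python
def preview_sql(query: str, sample_rows: int = 10) -> str:
    """Wrap a user query so only a few sample rows are fetched.

    Mimics a dataset preview by nesting the query in a subselect with a
    LIMIT clause; the subquery alias 'preview' is arbitrary.
    """
    inner = query.rstrip().rstrip(";")
    return f"SELECT * FROM ({inner}) AS preview LIMIT {sample_rows}"

print(preview_sql("SELECT * FROM sales WHERE region = 'EMEA';"))
```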
Once you import, you'll have the option to refine how the dataset is handled:
Dataset name: Assign a valid name to your new dataset (e.g., XYZ_THRESHOLD). Names should follow the allowed naming conventions (letters, numbers, underscores; no leading underscores or numbers; no special characters or spaces).
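The naming rules above can be expressed as a simple pattern. This regex is a sketch of the stated rules (letters, digits, and underscores, starting with a letter), not Tellius's exact validator:

```python
import re

# Starts with a letter, then any mix of letters, digits, and underscores.
_DATASET_NAME = re.compile(r"^[A-Za-z][A-Za-z0-9_]*$")

def is_valid_dataset_name(name: str) -> bool:
    """Check a candidate dataset name against the documented conventions."""
    return bool(_DATASET_NAME.match(name))

print(is_valid_dataset_name("XYZ_THRESHOLD"))  # a name following the rules
print(is_valid_dataset_name("_hidden"))        # leading underscore: rejected
```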
Connection Mode: When the Live checkbox is selected, data is fetched from the database on each query and is not copied to Tellius. Live mode ensures the most up-to-date data at the cost of potential query latency.
When Live mode is enabled, only the Create Business View option is displayed.
Copy to system: If enabled, copies the imported data onto Tellius's internal storage for faster performance. Reduces dependency on the source database's speed and network latency. Good for frequently queried datasets.
Cache dataset in memory: If enabled, keeps a cached copy of the dataset in memory (RAM) for even faster query responses. Memory caching dramatically reduces query time, beneficial for dashboards and frequently accessed data.
When only one table is imported, the following options will also be displayed:
Partition column: The column used as a basis for partitioning.
Number of partitions: How many segments to break the data into. (e.g., 5 partitions)
Lower bound/Upper bound: Approximate value range in the partition column to evenly distribute data.
Create Business View: If enabled, after loading data, you will be guided into the Business View creation stage.
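The partition settings above work together: the value range between the lower and upper bounds is split into roughly equal segments, one per partition, similar to how Spark's JDBC reader partitions on a numeric column. The sketch below illustrates the idea; it is not Tellius's exact algorithm:

```python
def partition_ranges(lower: int, upper: int, num_partitions: int) -> list[tuple[int, int]]:
    """Split [lower, upper] into roughly equal half-open ranges.

    Each tuple (start, end) describes one partition's slice of the
    partition column; the final range absorbs any remainder.
    """
    stride = (upper - lower) // num_partitions
    bounds = [lower + i * stride for i in range(num_partitions)] + [upper]
    return [(bounds[i], bounds[i + 1]) for i in range(num_partitions)]

# e.g., 5 partitions over a column ranging from 0 to 100:
print(partition_ranges(0, 100, 5))
```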
Click Load to finalize the process. After clicking Load, your dataset appears under Data → Dataset, ready for exploration, preparation, or Business View configuration. Otherwise, click Cancel to discard the current import without creating the dataset.
After the dataset is created, you can navigate to "Dataset", where you can review and further refine your newly created dataset. Apply transformations, joins, or filters in the Prepare module.
Partitioning: If enabled, splits a large dataset into smaller logical chunks (partitions). This improves performance on large tables by enabling parallel processing and faster load times. For more details, check out the dedicated page on Partitioning.