Connecting to Salesforce

A step-by-step guide to establishing a new connection to a Salesforce data source

Once you select Salesforce from Data → Connect, you are presented with a form to specify your connection parameters.

  1. Hostname: The network location (DNS or IP address) of your Salesforce server. Without a correct hostname, Tellius cannot establish a connection.

  2. Security Token: An extra layer of authentication required when connecting from outside trusted IP ranges or networks. Users can reset or generate a new token in their Salesforce personal settings (under “Reset My Security Token”).

  3. User: Provide the username with appropriate permissions (at least read access) to read the data.

  4. Password: Provide the corresponding password for the User provided.

  5. Datasource Name: A user-friendly name for this connection in Tellius.

  6. Save and Browse Host: Saves the connection details and attempts to browse the host for available schemas and tables. This initiates the handshake with the Salesforce server.

If your Salesforce environment is behind a firewall, a Tellius IP address is displayed on this page that you may need to whitelist.

Using validated datasource connection details

If you’ve previously validated and saved a Salesforce connection, you can reuse its details:

Using already established connections
  1. Use validated datasource connection details: When enabled, it reveals a dropdown to choose from existing, previously configured Salesforce connections.

  2. Select datasource: Lists all pre-validated Salesforce connections. Select the one you want to reuse, and all fields will be filled in automatically with the saved configuration.

  3. Browse Host: Similar to “Save and Browse Host”, except that it navigates forward using the selected connection’s saved parameters.

Loading tables

After establishing a connection, you will see options to load data from Salesforce tables.

  1. Select a table: Displays all available tables under the chosen Salesforce schema. Pick the tables you need for analysis. If there are many tables, you can narrow down your selection.

  2. Search for table: Filters the displayed tables based on your search term.

  3. Import: Imports the selected table(s) into Tellius.

Using Custom SQL

If you prefer more granular control or want to write your own SQL queries to load precisely the data you need, switch to the "Custom SQL" tab.

Custom SQL window
  1. Table filter: Helps locate a particular table by name before writing your SQL.

  2. Select a table: Choose a table name to use in your custom query.

  3. Query: A field for your custom SQL statement (e.g., SELECT Id, Name FROM Account); see the example after this list.

  4. Preview: Executes the SQL query and displays a few sample rows of the data you’re about to import in the “Dataset Preview” area, letting you validate that the query returns the correct data before fully importing it. This helps catch syntax errors or incorrect filters early.

  5. Import: Once satisfied with the preview, click Import to load the data returned by the SQL query into Tellius.
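
For example, a query like the one below loads only the columns and rows you need instead of an entire table. The object and field names here (Opportunity, StageName, IsClosed) are hypothetical placeholders; substitute the tables exposed in your own Salesforce schema.

  -- Load only open opportunities, with a handful of columns
  SELECT Id, Name, StageName, Amount, CloseDate
  FROM Opportunity
  WHERE IsClosed = false

Use Preview to confirm the returned rows look correct, then Import to load them into Tellius.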

Advanced Settings

Once you import, you’ll have the option to refine how the dataset is handled:

  1. Dataset name: Assign a valid name to your new dataset (e.g., XYZ_THRESHOLD). Names may contain letters, numbers, and underscores, but cannot begin with an underscore or a number, and cannot contain special characters or spaces.

  2. Connection Mode: When the Live checkbox is selected, data is fetched from Salesforce each time a query runs and is not copied into Tellius. Live mode ensures the most up-to-date data at the cost of potential query latency.

For non-live datasets
  1. Copy to system: If enabled, copies the imported data onto Tellius’s internal storage for faster performance. Reduces dependency on the source database’s speed and network latency. Good for frequently queried datasets.

  2. Cache dataset in memory: If enabled, keeps a cached copy of the dataset in memory (RAM) for even faster query responses. Memory caching dramatically reduces query time, beneficial for dashboards and frequently accessed data.

When only one table is imported, the following options will also be displayed:

  1. Partitioning: If enabled, it splits a large dataset into smaller logical chunks (partitions). Improves performance on large tables, enabling parallel processing and faster load times. For more details, check out this dedicated page on Partitioning.

  • Partition column: The column used as a basis for partitioning.

  • Number of partitions: How many segments to break the data into (e.g., 5 partitions).

  • Lower bound/Upper bound: The approximate value range in the partition column, used to distribute the data evenly across partitions (see the sketch after this list).

  2. Create Business View: If enabled, after loading data, you will be guided into the Business View creation stage.
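
As a rough illustration of how the partitioning settings interact, a range-partitioned load typically issues one query per partition. The column name, bounds, and exact ranges below are hypothetical, and the queries Tellius actually generates may differ.

  -- Partition column = Amount, Number of partitions = 5,
  -- Lower bound = 0, Upper bound = 500000 (stride = 100000)
  SELECT * FROM Opportunity WHERE Amount < 100000                       -- partition 1
  SELECT * FROM Opportunity WHERE Amount >= 100000 AND Amount < 200000  -- partition 2
  SELECT * FROM Opportunity WHERE Amount >= 200000 AND Amount < 300000  -- partition 3
  SELECT * FROM Opportunity WHERE Amount >= 300000 AND Amount < 400000  -- partition 4
  SELECT * FROM Opportunity WHERE Amount >= 400000                      -- partition 5

In this style of partitioning, rows below the lower bound or above the upper bound still land in the first or last partition; the bounds only control how evenly the intermediate ranges are split.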

Click on Load to finalize the process. After clicking Load, your dataset appears under Data → Datasets, ready for exploration, preparation, or Business View configuration. Otherwise, click Cancel to discard the current import without creating the dataset.

After the dataset is created, you can navigate to "Dataset" to review and further refine your newly created dataset by applying transformations, joins, or filters in the Prepare module.

Editing the SQL Load

After you have created and saved a dataset, there may be situations where you need to modify the underlying SQL query or adjust how the data is partitioned. For example, you might need to update the SQL query to include additional columns, apply a new filter, join to another table, or adjust partitioning parameters.

  1. After loading all the datasets, click on the three-dot menu of the required Salesforce dataset under Data → Datasets, and select Edit SQL load from the menu. The following window will be displayed.

Custom SQL Load
  1. Inside the dialog, you will see an interface with two toggles at the top: Query and DBTable.

  2. Choose Query if you want to enter or modify a custom SQL query directly. If your data retrieval logic involves multiple joins, in-line calculations, or advanced filters that are easier to express in SQL, the Query option is more appropriate. The Query field is where you will paste or write your updated SQL code.

  3. Choose DBTable if you simply want to select one or more tables directly from your database without writing or maintaining a custom SQL query. This approach is often easier and more straightforward when you don’t need complex joins, filters, or transformations, as Tellius handles the basic data retrieval automatically.

  4. Update the SQL text in the Query section as needed. For example, you might add a WHERE clause filter, join another table, or select additional columns (see the example after these steps).

  5. After making changes, click Run Validation to ensure that the updated SQL is syntactically correct and returns data.

  6. Below the Query section, you’ll find the Partitioning (optional) toggle. Partitioning divides the dataset into smaller chunks based on a chosen numeric column. This can vastly improve performance and reduce load times on large datasets.

  7. Toggle the Partitioning switch to turn it on. For more details on Partitioning and its fields, please check out this page.
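
For example, an edit of the kind described in step 4 might look like the following (the table and column names are hypothetical placeholders):

  -- Original query
  SELECT Id, Name, Amount
  FROM Opportunity

  -- Updated query: adds a column, a date filter, and a join to Account
  SELECT o.Id, o.Name, o.Amount, o.CloseDate, a.Industry
  FROM Opportunity o
  JOIN Account a ON a.Id = o.AccountId
  WHERE o.CloseDate >= '2024-01-01'

After updating the query, click Run Validation to confirm it parses and returns data before saving.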
