# Setting up LLM for Kaiya

Starting with Tellius 5.4, admins can configure which LLM (Large Language Model) Kaiya uses to interpret questions, generate insights, and trigger workflows. This is managed under **Settings → Application Settings → Kaiya**.

Tellius supports the following LLM providers:

* OpenAI
* Gemini (Google)
* Azure OpenAI
* Bedrock (AWS, from Tellius 6.0)

Each option has its own configuration panel, and only one model setup is active at a time. Here's how to configure each one:

### OpenAI

When you choose **OpenAI** from the dropdown, you’re connecting Kaiya directly to OpenAI’s hosted models like GPT-4 or GPT-3.5.

* **Base URL**: Enter the API endpoint for OpenAI. Typically, this is `https://api.openai.com/v1`. It’s the base path where all API requests are sent.
* **Model Name**: This is where you specify the exact OpenAI model you want to use. Common options include `gpt-4`, `gpt-4o`, or `gpt-3.5-turbo`. The model name must match what your API key has access to.
* **Default Headers**: If you need to send any additional headers with each API request (like org-specific tokens), you can enter them here in JSON format. Most users can leave this as `{}`.
* **API Key**: Paste your OpenAI API key here. This authenticates your requests. Make sure the key has permission to use the model you specified above.
* **Validate**: Once all fields are filled, click on **"Validate"** to test the connection. Tellius checks whether it can reach OpenAI and confirms that your key and model are valid.
* **Save**: After successful validation, click on **“Save”** to store the configuration and activate this model in Kaiya.
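The **Default Headers** value must be valid JSON. A quick way to sanity-check a value before pasting it in (the `OpenAI-Organization` header below is illustrative, not something Tellius requires):

```python
import json

# Illustrative extra header; most setups can leave the field as {}.
default_headers = '{"OpenAI-Organization": "org-example"}'

# json.loads raises an error if the string is not valid JSON.
headers = json.loads(default_headers)

# Requests are sent to paths under the Base URL, e.g. chat completions:
base_url = "https://api.openai.com/v1"
chat_endpoint = f"{base_url}/chat/completions"
```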

<figure><img src="https://content.gitbook.com/content/s16h5onryWtbaHwBa10b/blobs/nCDwvPwuq0r63USuO2pm/image.png" alt="" width="563"><figcaption><p>OpenAI LLM</p></figcaption></figure>

### Gemini

When using **Gemini**, you’re connecting to Google’s LLM (such as Gemini Pro) through its API.

* **Model Name**: Enter the name of the Gemini model or deployment. For example, `gemini-pro` or a custom name provisioned in your Vertex AI setup.
* **API Key**: Provide the key you obtained from Google Cloud. This authenticates access to Gemini models.
* **Validate**: Once all fields are filled, click on **"Validate"** to test the connection and API key combination.
* **Save**: Once validation passes, click on **“Save”** to finalize the setup.
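For context, the model name maps onto Google's public `generateContent` endpoint, where the API key is passed as a query parameter. This sketch only shows the URL shape; Tellius builds the actual request for you:

```python
# Sketch of the Gemini REST endpoint shape (Tellius constructs the real request).
model_name = "gemini-pro"
api_key = "YOUR_API_KEY"  # placeholder; never hard-code real keys

url = (
    "https://generativelanguage.googleapis.com/v1beta/"
    f"models/{model_name}:generateContent?key={api_key}"
)
```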

{% hint style="info" %}
Gemini may have different rate limits or quota settings, depending on your Google Cloud setup.
{% endhint %}

<figure><img src="https://content.gitbook.com/content/s16h5onryWtbaHwBa10b/blobs/WMhuUu38y5EJ3hEFGofr/image.png" alt="" width="563"><figcaption><p>Gemini LLM</p></figcaption></figure>

### Azure OpenAI

If your organization hosts OpenAI models on Azure, select **Azure OpenAI**.

{% hint style="warning" %}
Service Principal authentication may be available in certain deployments/versions. If you don’t see an **auth method** toggle or fields for **Tenant ID, Client ID, Client Secret**, your workspace is currently set up for **API key** auth only. Please contact your admin for further details.
{% endhint %}

* **Base URL**: Enter the full URL of your Azure OpenAI resource. Example: `https://<resource-name>.openai.azure.com/`.
* **Deployment Name**: This must match the exact deployment name configured in your Azure portal. This tells Tellius which model to use.
* **Default Headers**: Provide any headers you want to send with the API requests (optional).
* **API Version**: Specify the Azure OpenAI API version to use. For example, `2024-02-15-preview`. This ensures compatibility with new features or model updates.
* **Azure Deployment Type**: Choose the type of deployment (e.g., “Chat” or “Completions”) based on what you have set up in Azure.
* **Validate**: Once all fields are filled, click on **"Validate"** to test the connection. Tellius checks whether it can reach Azure OpenAI and confirms that your key and model are valid.
* **Save**: After successful validation, click on **“Save”** to store the configuration and activate this model in Kaiya.
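To see how the fields combine, Azure OpenAI requests follow Microsoft's documented URL pattern, built from the Base URL, deployment name, and API version. The resource and deployment names below are placeholders:

```python
# Placeholders; substitute your own resource and deployment names.
resource_name = "my-resource"
deployment = "gpt-4o-prod"          # must match the deployment name in the Azure portal
api_version = "2024-02-15-preview"

base_url = f"https://{resource_name}.openai.azure.com"
request_url = (
    f"{base_url}/openai/deployments/{deployment}"
    f"/chat/completions?api-version={api_version}"
)
```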

<figure><img src="https://content.gitbook.com/content/s16h5onryWtbaHwBa10b/blobs/2on0mkJFRuFWXOuHEhQh/image.png" alt="" width="563"><figcaption><p>Azure OpenAI LLM</p></figcaption></figure>

{% hint style="info" %}
Have your API keys or credentials ready before you begin configuration to speed up validation.
{% endhint %}

### Bedrock (from Tellius v6.0)

* **Reference Name:** A user-friendly label used in the “Assign the LLM Providers” dropdowns.
* **LLM Model (provider):** Select Bedrock.
* **Model Parameter:** JSON/text block with model settings (e.g., model/model\_id, temperature, max tokens, optional base URL/region if applicable). Used by Kaiya to call your chosen Bedrock model.
* **Default Header (Optional):** Any additional HTTP headers your environment requires (leave `{}` if not needed).
* **Bedrock Authentication Type:** Choose how Kaiya should authenticate:
  * **Bedrock API Key:** Enter a Bedrock API key to authorize requests.
  * **AWS Access/Secret Keys:** Provide an Access Key ID and Secret Access Key to sign Bedrock requests via AWS credentials.

Click on **Validate** to verify the credentials and parameters before saving.
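The **Model Parameter** field expects a JSON block. A minimal illustrative value, parsed here to confirm it is well-formed (the model ID and settings are examples, not defaults):

```python
import json

# Illustrative Model Parameter value; every key and value here is an example.
model_parameter = """
{
  "model_id": "anthropic.claude-3-sonnet-20240229-v1:0",
  "temperature": 0.2,
  "max_tokens": 1024,
  "region": "us-east-1"
}
"""

params = json.loads(model_parameter)  # fails loudly on malformed JSON
```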

<figure><img src="https://1424959359-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2Fs16h5onryWtbaHwBa10b%2Fuploads%2Fu1hk3x5HAE8GFpZEANbw%2Fimage.png?alt=media&#x26;token=15615701-ee07-4a73-ae11-b5e70d56a21f" alt="" width="375"><figcaption></figcaption></figure>

### Assign which config powers each capability

In **Assign the LLM Providers**, pick a saved configuration for:

* **General Intelligence:** Everyday natural language tasks (SRL), summarization, straightforward prompts.
* **Code Intelligence:** Technical tasks (Text2SQL, Python, code reasoning).
* **Reasoning Intelligence:** Agentic workflows, multi-step planning, Deep Insights mode.

<figure><img src="https://1424959359-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2Fs16h5onryWtbaHwBa10b%2Fuploads%2FqJQQikaTNjkpzKITRjYC%2Fimage.png?alt=media&#x26;token=9a647a4a-64df-4a1b-8d13-60bb7584f591" alt=""><figcaption></figcaption></figure>

{% hint style="warning" %}
You can add as many LLM configurations as you need (for different models/regions/temperature profiles). There must be at least one saved configuration to assign.
{% endhint %}
