# Setting up LLM for Kaiya

Starting with Tellius 5.4, admins can configure which LLM (Large Language Model) Kaiya uses to interpret questions, generate insights, and trigger workflows. This is managed under **Settings → Application Settings → Kaiya**.

Tellius supports three LLM providers:

* OpenAI
* Gemini (Google)
* Azure OpenAI

Each provider has its own configuration panel, and only one model setup can be active at a time. Here's how to configure each one:

### OpenAI

When you choose **OpenAI** from the dropdown, you're connecting Kaiya directly to OpenAI's hosted models, such as GPT-4 or GPT-3.5.

* **Base URL**: Enter the API endpoint for OpenAI. Typically, this is `https://api.openai.com/v1`. It’s the base path where all API requests are sent.
* **Model Name**: This is where you specify the exact OpenAI model you want to use. Common options include `gpt-4`, `gpt-4o`, or `gpt-3.5-turbo`. The model name must match what your API key has access to.
* **Default Headers**: If you need to send any additional headers with each API request (like org-specific tokens), you can enter them here in JSON format. Most users can leave this as `{}`.
* **API Key**: Paste your OpenAI API key here. This authenticates your requests. Make sure the key has permission to use the model you specified above.
* **Validate**: Once all fields are filled, click **Validate** to test the connection. Tellius checks whether it can reach OpenAI and confirms that your key and model are valid.
* **Save**: After validation succeeds, click **Save** to store the configuration and activate this model for Kaiya.
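To see how these fields fit together, here is a sketch of how they would map onto a standard request against OpenAI's public chat-completions endpoint. This is illustrative only: the placeholder key and the exact request Tellius sends internally are assumptions, but the endpoint path and `Authorization` header follow OpenAI's documented API.

```python
import json

# Values as entered in the Kaiya OpenAI panel (illustrative placeholders).
base_url = "https://api.openai.com/v1"
model_name = "gpt-4o"
default_headers = json.loads("{}")      # the Default Headers field, JSON format
api_key = "sk-..."                      # placeholder, not a real key

# A comparable request against OpenAI's public API would be assembled like this:
url = f"{base_url}/chat/completions"
headers = {
    "Authorization": f"Bearer {api_key}",
    "Content-Type": "application/json",
    **default_headers,                  # any extra headers merge in last
}
body = {"model": model_name, "messages": [{"role": "user", "content": "ping"}]}

print(url)  # https://api.openai.com/v1/chat/completions
```

Note how the **Default Headers** JSON is merged into the standard headers, which is why `{}` is a safe default when no org-specific headers are needed.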

<figure><img src="https://content.gitbook.com/content/VXyBWnsg0T2tHBl87viA/blobs/RsSLtGzAbYtm0RiTVqd2/image.png" alt="" width="563"><figcaption><p>OpenAI LLM</p></figcaption></figure>

### Gemini

When using **Gemini**, you’re connecting to Google’s LLM (such as Gemini Pro) through its API.

* **Model Name**: Enter the name of the Gemini model or deployment. For example, `gemini-pro` or a custom name provisioned in your Vertex AI setup.
* **API Key**: Provide the key you obtained from Google Cloud. This authenticates access to Gemini models.
* **Validate**: Once all fields are filled, click **Validate** to test the connection and API key combination.
* **Save**: Once validation passes, click **Save** to finalize the setup.
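For reference, the two Gemini fields correspond to how Google's public Generative Language API addresses a model. The sketch below only assembles the URL; the placeholder key and the assumption that the key is passed as a query parameter reflect Google's documented API, not Tellius internals.

```python
# Values as entered in the Kaiya Gemini panel (illustrative placeholders).
model_name = "gemini-pro"
api_key = "AIza..."  # placeholder Google Cloud API key

# Google's public Generative Language API addresses a model like this,
# with the API key passed as a query parameter:
url = (
    "https://generativelanguage.googleapis.com/v1beta/models/"
    f"{model_name}:generateContent?key={api_key}"
)
```

If your model is provisioned through Vertex AI instead, the endpoint and authentication differ from this sketch, which is another reason to rely on **Validate** to confirm the combination works.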

{% hint style="info" %}
Gemini may have different rate limits or quota settings, depending on your Google Cloud setup.
{% endhint %}

<figure><img src="https://content.gitbook.com/content/VXyBWnsg0T2tHBl87viA/blobs/stbri6tByi4oLpjE9tmU/image.png" alt="" width="563"><figcaption><p>Gemini LLM</p></figcaption></figure>

### Azure OpenAI

If your organization hosts OpenAI models on Azure, select **Azure OpenAI**.

{% hint style="warning" %}
Service Principal authentication may be available in certain deployments/versions. If you don’t see an **auth method** toggle or fields for **Tenant ID, Client ID, Client Secret**, your workspace is currently set up for **API key** auth only. Please contact your admin for further details.
{% endhint %}

* **Base URL**: Enter the full URL of your Azure OpenAI resource. Example: `https://<resource-name>.openai.azure.com/`.
* **Deployment Name**: This must match the exact deployment name configured in your Azure portal. This tells Tellius which model to use.
* **Default Headers**: Provide any headers you want to send with the API requests (optional).
* **API Version**: Specify the Azure OpenAI API version to use. For example, `2024-02-15-preview`. This ensures compatibility with new features or model updates.
* **Azure Deployment Type**: Choose the type of deployment (e.g., “Chat” or “Completions”) based on what you have set up in Azure.
* **Validate**: Once all fields are filled, click **Validate** to test the connection. Tellius checks whether it can reach Azure OpenAI and confirms that your key and deployment are valid.
* **Save**: After validation succeeds, click **Save** to store the configuration and activate this model for Kaiya.
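Azure OpenAI routes requests by deployment name rather than model name, and the API version travels as a query parameter, which is why those two fields matter. The sketch below shows how the panel's fields would combine into a request URL following Azure's documented REST pattern; the resource and deployment names are hypothetical placeholders.

```python
# Values as entered in the Kaiya Azure OpenAI panel (illustrative placeholders).
base_url = "https://my-resource.openai.azure.com"  # hypothetical resource name
deployment_name = "gpt4-prod"                      # must match the Azure portal
api_version = "2024-02-15-preview"
api_key = "..."                                    # placeholder key

# Azure addresses the deployment (not the model) in the path, and the
# API version as a query parameter; the key goes in an "api-key" header:
url = (
    f"{base_url}/openai/deployments/{deployment_name}"
    f"/chat/completions?api-version={api_version}"
)
headers = {"api-key": api_key, "Content-Type": "application/json"}
```

Because the deployment name in the URL must match your Azure portal exactly, a typo there is one of the most common causes of a failed **Validate**.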

<figure><img src="https://content.gitbook.com/content/VXyBWnsg0T2tHBl87viA/blobs/iFeH9wJsGEGvzXMwsn9e/image.png" alt="" width="563"><figcaption><p>Azure OpenAI LLM</p></figcaption></figure>

{% hint style="info" %}
Please make sure you have your API keys or credentials ready before starting configuration to speed up validation.
{% endhint %}
