Setting up LLM for Kaiya
Choose your desired LLM for Kaiya
In Tellius 5.4, admins can configure which LLM (Large Language Model) Kaiya uses to interpret questions, generate insights, and trigger workflows. This is managed under Settings → Application Settings → Kaiya.
Tellius supports three types of LLMs:
OpenAI
Gemini (Google)
Azure OpenAI
Each option has its own configuration panel, and only one model setup can be active at a time. Here's how to configure each one:
When you choose OpenAI from the dropdown, you’re connecting Kaiya directly to OpenAI’s hosted models like GPT-4 or GPT-3.5.
Base URL: Enter the API endpoint for OpenAI. Typically, this is https://api.openai.com/v1. It's the base path where all API requests are sent.
Model Name: This is where you specify the exact OpenAI model you want to use. Common options include gpt-4, gpt-4o, or gpt-3.5-turbo. The model name must match what your API key has access to.
Default Headers: If you need to send any additional headers with each API request (like org-specific tokens), you can enter them here in JSON format. Most users can leave this as {}.
API Key: Paste your OpenAI API key here. This authenticates your requests. Make sure the key has permission to use the model you specified above.
Validate: Once all fields are filled, click on "Validate" to test the connection. Tellius will check whether it can reach OpenAI and confirm if your key and model are valid.
Save: After a successful validation, you’ll be able to click on “Save” to store the configuration and activate this model in Kaiya.
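To make the Validate step concrete, here is a hypothetical Python sketch of the kind of request a validator could send: a GET against OpenAI's documented /models/<name> endpoint, authenticated with the API key and merged with the Default Headers JSON. The helper name `build_validation_request` is an illustration, not Tellius's actual implementation.

```python
import json
from urllib.request import Request

def build_validation_request(base_url: str, model_name: str,
                             api_key: str, default_headers: str = "{}") -> Request:
    """Assemble a GET request against OpenAI's /models/<name> endpoint.

    A 200 response means the key can see the model; a 401 or 404 means
    the key or model name is wrong. (Hypothetical helper for illustration,
    not Tellius's internal validation code.)
    """
    headers = json.loads(default_headers or "{}")   # the "Default Headers" JSON field
    headers["Authorization"] = f"Bearer {api_key}"  # the "API Key" field
    url = f"{base_url.rstrip('/')}/models/{model_name}"
    return Request(url, headers=headers, method="GET")

# Example using the field values from the steps above (the key is a placeholder):
req = build_validation_request("https://api.openai.com/v1", "gpt-4", "sk-example")
print(req.full_url)  # https://api.openai.com/v1/models/gpt-4
```

Sending this request (for example with `urllib.request.urlopen`) and checking the status code is, conceptually, what a successful "Validate" confirms.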
When using Gemini, you’re connecting to Google’s LLM (such as Gemini Pro) through its API.
Model Name: Enter the name of the Gemini model or deployment. For example, gemini-pro, or a custom name provisioned in your Vertex AI setup.
API Key: Provide the key you obtained from Google Cloud. This authenticates access to Gemini models.
Validate: Once all fields are filled, click on "Validate" to test the connection and API key combination.
Save: Once validation passes, click on “Save” to finalize the setup.
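For intuition, a validator could check a Gemini key/model pair against Google's public Generative Language API model-info endpoint, as in the hypothetical sketch below. Note this assumes a Google AI Studio key; a Vertex AI deployment uses a different, project-scoped URL, and this is not Tellius's actual validation code.

```python
def build_gemini_validation_url(model_name: str, api_key: str) -> str:
    """Build a GET URL for the Generative Language API's model-info
    endpoint; a 200 response confirms the key can access the model.
    (Illustrative only -- Vertex AI setups use a project-scoped URL instead.)
    """
    base = "https://generativelanguage.googleapis.com/v1beta"
    return f"{base}/models/{model_name}?key={api_key}"

# Example with the field values above (the key is a placeholder):
url = build_gemini_validation_url("gemini-pro", "example-key")
```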
If your organization hosts OpenAI models on Azure, select Azure OpenAI.
Base URL: Enter the full URL of your Azure OpenAI resource. Example: https://<resource-name>.openai.azure.com/
Deployment Name: This must match the exact deployment name configured in your Azure portal. This tells Tellius which model to use.
Default Headers: Provide any headers you want to send with the API requests (optional).
API Version: Specify the Azure OpenAI API version to use. For example, 2024-02-15-preview. This ensures compatibility with new features or model updates.
Azure Deployment Type: Choose the type of deployment (e.g., “Chat” or “Completions”) based on what you have set up in Azure.
Validate: Once all fields are filled, click on "Validate" to test the connection. Tellius will check whether it can reach Azure OpenAI and confirm if your key and model are valid.
Save: After a successful validation, you’ll be able to click on “Save” to store the configuration and activate this model in Kaiya.
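The Azure fields map directly onto Azure OpenAI's documented request URL format, which routes by deployment name (not model name) and requires an explicit api-version query parameter. The sketch below shows how those fields could combine into a request URL; `build_azure_chat_url` is a hypothetical helper for illustration, and the API key would travel separately in an `api-key` header.

```python
def build_azure_chat_url(base_url: str, deployment_name: str,
                         api_version: str, deployment_type: str = "chat") -> str:
    """Combine the Azure OpenAI fields into a request URL.

    Azure routes requests by deployment name rather than model name, and
    every call must carry an api-version query parameter -- which is why
    both are required fields in the configuration panel.
    (Hypothetical helper for illustration, not Tellius's internal code.)
    """
    path = "chat/completions" if deployment_type == "chat" else "completions"
    return (f"{base_url.rstrip('/')}/openai/deployments/"
            f"{deployment_name}/{path}?api-version={api_version}")

# Example with placeholder resource and deployment names:
url = build_azure_chat_url("https://my-resource.openai.azure.com/",
                           "my-gpt4-deployment", "2024-02-15-preview")
```

A mismatch between the Deployment Name field and the deployment configured in the Azure portal would make this URL return a 404, which is one of the failures the Validate step surfaces.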