Kaiya
This page in Settings → Advanced → Connectors → Kaiya is where administrators configure what Kaiya can do across the platform and how it behaves for end users. From this page you can enable or hide Kaiya, turn on conversational and agentic capabilities, configure voice input, allow web search as an additional information source, and connect Kaiya to observability tooling (LangSmith). These settings directly affect user-facing experiences in Vizpads, Search, Feed, Insights, and the Kaiya module.

Tellius Kaiya
This top-level toggle turns Kaiya on or off across the platform.
On: Kaiya experiences can appear in supported surfaces (such as Vizpads, Search, Feed, and Insights), and Kaiya features that are enabled below can be used by end users.
Off: The Kaiya module is hidden and Kaiya-related UI entry points are removed. Any Kaiya summaries and Kaiya-specific interactions will not be available to users.
Position of Kaiya Summaries in Search
This dropdown controls where Kaiya’s summary appears relative to charts in Search.
Above the chart: The summary is placed before the visualization, which is helpful when you want a narrative-first experience.
Below the chart: The summary appears after the visualization, which is helpful when you want a visual-first experience.
Kaiya Conversational AI
This group controls the interactive Kaiya conversational experience (the Kaiya module and conversational workflows).
Enable Kaiya Conversational AI
This toggle enables the Kaiya conversational experience. When enabled, users can interact with Kaiya in a chat-style workflow (subject to the additional capabilities enabled below). If you disable it, Kaiya’s conversational UI and conversational workflows are not available, even if Kaiya is enabled globally.
Hide Kaiya from UI Only
This toggle hides Kaiya’s entry in the left side panel without requiring you to fully turn Kaiya off at the tenant level.
Use this when you want to temporarily remove Kaiya from user-facing pages (for example, during staged rollouts or internal testing).
If your goal is to fully disable Kaiya functionality everywhere, use the global “Enable Kaiya” toggle instead.
Enable Kaiya Auto BusinessView Assistant
When enabled, users can ask questions without selecting a Business View first. Kaiya will attempt to choose the most relevant Business View for the question.
This is helpful when business users do not know which dataset or Business View to pick.
This is less ideal when your environment requires strict governance and users must explicitly choose the governed dataset for each question.
Enable Kaiya Agentic
This enables Agent Mode / Deep Insight / agentic analysis features. When it is on, users can access advanced workflows such as the Agent Library and ask multi-step, multi-turn, complex questions that require agents.
Enable this when you want Kaiya to go beyond single-step question answering and support multi-step reasoning workflows.
Voice Assistant
Start having two-way conversations with Kaiya. It accepts voice input, which is processed as a standard query. Kaiya then responds in voice and can handle pauses, fillers, and mid-sentence direction changes. Both your questions and Kaiya’s responses are transcribed to text.
Enable Voice Assistant
Turns voice input on or off for Kaiya. If enabled, users can have two-way voice-driven interactions.
Voice Assistant Type
Select the transcription provider (OpenAI or Deepgram) used for voice input.

Choose a provider based on your organization’s requirements for:
Accuracy and language support
Compliance and data handling policies
Cost and rate limits
API Key
Provide the API key for the selected voice provider (for example, Deepgram). This key authorizes Kaiya to call the provider’s transcription service.
Deepgram Configuration
This is a JSON-style configuration area for passing provider-specific settings (for example, Deepgram options); a sample configuration appears below the list.
Use it to fine-tune behavior such as:
Domain-specific recognition (for example, product names or business terminology)
Keyword biasing or keyterms for higher accuracy on your organization’s vocabulary
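The following is a minimal sketch of what this field might contain, assuming your deployment passes these options through to Deepgram’s transcription API. The option names shown (model, language, smart_format, keywords) follow Deepgram’s documented parameters, but confirm which options your Kaiya version actually supports before relying on them.

```json
{
  "model": "nova-2",
  "language": "en-US",
  "smart_format": true,
  "keywords": ["Vizpad", "BusinessView", "Tellius"]
}
```

The keywords list biases transcription toward your organization’s vocabulary, which is where most of the domain-specific accuracy gains come from.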
Unstructured and Conversational Guardrails
Enable Kaiya Unstructured
Enables Kaiya to work with unstructured sources (non-dataset content) such as documents or call transcripts, depending on your connector setup.
Enable this when you want Kaiya to answer questions that reference documents and text-heavy sources.
Enable Kaiya Conversation Terms and Conditions
When enabled, Kaiya displays a window with the terms and conditions, and users must accept the terms the first time they open Kaiya.
Enable Help / Unknown Questions Handling
Controls how Kaiya behaves when users ask off-topic, casual, or unsupported questions.
If enabled, Kaiya responds gracefully, explains limitations, and guides the user back to supported business/analytics questions.
If disabled, Kaiya remains strict and focuses only on supported analytics and business queries.
Web Search
When Enable Kaiya Web Search is turned on, Kaiya can use web search augmentation to improve answers only when a question depends on external, time-sensitive context. This capability is designed to ground and parameterize analytics, not to behave like a general-purpose internet research assistant.
Enable Kaiya Web Search
Turns on Kaiya’s ability to use web search where supported.
Enable this when:
You want Kaiya to supplement answers with public information.
Your use cases benefit from external context (industry terms, general definitions, publicly available references).
Web Search API Key / Credential
When web search is enabled, the page includes a credential field (masked in the UI). Use this to provide the API key or credential required by your configured web search provider.
LangSmith Integration (Observability)
LangSmith integration is used for monitoring, tracing, and evaluation of LLM interactions. This is typically used by admins and engineering teams to diagnose quality issues, measure performance, and review execution traces.
Enable Kaiya LangSmith Integration
Turns LangSmith integration on or off.
Enable this when:
You need traceability into Kaiya’s execution for debugging and quality evaluation.
You are actively tuning prompts, workflows, or agent behavior and need observability.
LangSmith Endpoint (URL): Provide the LangSmith API endpoint URL.
Project Name: Provide the LangSmith project identifier. This is used to organize traces and runs.
LangSmith API Key: Provide the LangSmith API key. This authorizes Kaiya to send traces and metadata to LangSmith.
Before enabling, confirm what information is captured (prompts, responses, tool calls, metadata) and align it with your organization’s data-handling policies.
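For reference, here is a hypothetical set of values for the three fields above, assuming the default LangSmith cloud endpoint. The project name and API key are placeholders, and the JSON shape simply mirrors the settings form rather than any Kaiya or LangSmith API.

```json
{
  "endpoint": "https://api.smith.langchain.com",
  "projectName": "kaiya-production",
  "apiKey": "lsv2_pt_xxxxxxxxxxxxxxxxxxxx"
}
```

Traces sent under the same project name are grouped together in LangSmith, so consider one project per environment (for example, staging and production) to keep runs separable.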

Analytics Mode
This decides how Kaiya interprets user questions.
Auto lets Kaiya choose the best method for each query. Good for mixed audiences and mixed data sources.
SQL forces a Text-to-SQL style experience. Kaiya will generate warehouse-native SQL against your Business Views.
SRL breaks the question into intent, metric, dimension, filter, and time (see the example breakdown below). Best when you have Query Learnings with intent defined, you want Kaiya to understand your organization’s phrasing (“top 5 accounts dropping”, “market share”, “customers due for refill”), or users are very conversational.
When to use which?
Use SQL when your data lives in supported warehouses and you want deterministic, governed, query-like answers.
Use SRL when users talk in domain language, you have Query Learnings with intent defined, and you want high language tolerance, intent reuse, phrase learning, and smarter clarifications.
Use Auto when your audience or data sources are mixed and you don’t want to choose; Kaiya picks the method per query.
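To make the SRL breakdown concrete, here is a hypothetical decomposition of one question. The field names are illustrative only and do not reflect Kaiya’s internal schema.

```json
{
  "question": "top 5 accounts dropping in revenue last quarter",
  "intent": "top-n by decline",
  "metric": "revenue",
  "dimension": "account",
  "filter": { "limit": 5, "direction": "decreasing" },
  "time": "last quarter"
}
```

Because the question is resolved into these parts rather than directly into SQL, a Query Learning with a defined intent can be matched and reused across differently phrased questions, which is what enables the intent reuse and phrase learning described above.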
Clarification Mode
This controls how often Kaiya stops to ask the user something.
None: Kaiya runs with the context it has and does not ask follow-up questions. Good for demos and power users.
Low: Kaiya asks only when the query truly can’t be executed (for example, a missing metric or no time field).
Medium: Kaiya asks if the ambiguity could change the answer, for example when “growth” could mean percentage or absolute.
High: Kaiya resolves every low-confidence part before answering. Best for C-level or regulated use cases where a wrong answer costs far more than a slow one.
The disambiguation is depth-based, not count-based. Higher levels improve accuracy but add one or two questions to the flow.
Prompt Variation
This tells Kaiya which prompt pack to use for a given LLM setup.
Default – use when there are no prompt customizations.
GPT-5 / Sonnet – use when your LLM configuration has model-specific prompts (for example, Bedrock vs Azure OpenAI) and you want phrasing, tool-calling style, or guardrails optimized for that model.
This dropdown ties directly to the Kaiya → Prompts page (next section): whatever you define there becomes selectable here.