How Kaiya remembers your context
Kaiya maintains a short-term memory system that tracks context across every turn in your conversation. This means follow-up questions work naturally; you do not need to repeat setup, re-state filters, or re-specify the scope of your analysis when asking the next question. Kaiya remembers what you have already established and builds on it.
What Kaiya remembers
Kaiya's memory tracks four categories of context across your conversation:
Conversation context. Kaiya tracks the metrics, dimensions, filters, and entities from earlier in the conversation. If you asked about TRx for Product X in the Northeast, and then ask "How about the Southeast?", Kaiya knows you mean TRx for Product X. The metric and product are carried forward; only the geographic filter changes. You do not need to re-specify everything from scratch.
Web search anchors. When Kaiya retrieves an anchor from the web (as described in the Web Search section), it stores that anchor in short-term memory with a freshness-based TTL. High-freshness anchors like stock prices refresh quickly. Medium-freshness anchors like policy effective dates persist longer. Historical events persist for the full conversation. This ensures follow-up questions use the same dates and scope as the original question without Kaiya needing to re-search or re-estimate them.
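The freshness-based TTL behavior can be pictured as a small keyed store where each tier maps to an expiry window. This is only an illustrative sketch; the tier names and TTL values here are hypothetical, not Kaiya's actual configuration.

```python
import time

# Hypothetical freshness tiers and TTLs in seconds; illustrative values only.
TTL_BY_FRESHNESS = {
    "high": 60,          # e.g. stock prices: refresh quickly
    "medium": 3600,      # e.g. policy effective dates: persist longer
    "historical": None,  # e.g. past events: kept for the whole conversation
}

class AnchorStore:
    """Toy short-term store for web-search anchors with freshness-based TTLs."""

    def __init__(self):
        self._anchors = {}

    def put(self, key, value, freshness):
        self._anchors[key] = (value, freshness, time.monotonic())

    def get(self, key):
        entry = self._anchors.get(key)
        if entry is None:
            return None
        value, freshness, stored_at = entry
        ttl = TTL_BY_FRESHNESS[freshness]
        if ttl is not None and time.monotonic() - stored_at > ttl:
            del self._anchors[key]  # expired: a fresh search would be needed
            return None
        return value
```

An anchor retrieved within its window is reused as-is, which is what keeps follow-up questions pinned to the same dates and scope as the original question.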
Cross-capability context. Kaiya retains context when you switch between different Kaiya capabilities within the same conversation. If you run a Deep Insight and then ask "Now break that down by district", Kaiya carries the analytical context forward from the Deep Insight into the follow-up query. If you run an analytics query and then ask for a summary, Kaiya includes the results from that query in the summary. Context flows seamlessly across Deep Insights, direct analytics queries, metadata questions, and summaries within the same thread.
Implicit definitions. Kaiya also tracks the implicit analytical frame you have established through your questions. This includes the scope you are working in (such as a specific region, payer segment, or store cluster), the comparison frame (such as pre versus post, forecast versus actuals, or year-over-year), and the business logic for how metrics are being calculated or interpreted.
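The carry-forward behavior described above can be sketched as a merge of partial context: a follow-up supplies only the slots it changes, and every other slot is inherited from what the conversation has already established. The slot names below are hypothetical and purely illustrative.

```python
def resolve_context(established, follow_up):
    """Merge a follow-up's partial slots onto the established context.

    Slots the follow-up does not mention (metric, product, region, ...)
    are inherited; slots it does mention override the prior values.
    """
    return {**established, **follow_up}

# "TRx for Product X in the Northeast" establishes three slots:
established = {"metric": "TRx", "product": "Product X", "region": "Northeast"}

# "How about the Southeast?" only changes the geographic filter:
resolved = resolve_context(established, {"region": "Southeast"})
# The metric and product carry forward; only the region changed.
```

The same shape applies to the comparison frame and scope: anything not explicitly overridden stays in effect for the next question.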
How is this different from basic follow-up features?
Basic follow-up features in most analytics tools carry forward simple input parameters: the product you selected, the time range you specified. They work for narrow, predictable queries. Kaiya carries forward computed context, not just input parameters. This distinction is important because it enables three types of follow-ups that basic tools cannot handle:
References to prior results. If Kaiya showed you a chart, it remembers the specific output. You can ask "Which region had the biggest drop?" and Kaiya references the data from the prior result to answer the question. You do not need to re-run the analysis or re-describe the chart.
Anchors from external lookups. If Kaiya searched the web for a policy date or an announcement date, it remembers that date for follow-ups. You do not need to re-trigger the lookup or manually specify the date in your next question.
Multi-step reasoning chains. If Kaiya ran a multi-step analysis (for example, an SQL query, then Python processing, then a visualization), it remembers the intermediate steps. Follow-up questions can reference any part of that chain. You can ask about the raw data, the computed output, or the visualization, and Kaiya knows which part of the chain you are referring to.
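One way to picture a chain whose intermediate steps stay addressable is an ordered record of named step outputs. This is a toy sketch under that assumption; the step names and data are invented for illustration.

```python
class ReasoningChain:
    """Toy record of a multi-step analysis; every intermediate stays addressable."""

    def __init__(self):
        self.steps = []  # (name, output) pairs in execution order

    def record(self, name, output):
        self.steps.append((name, output))
        return output

    def recall(self, name):
        # A follow-up can reference any part of the chain by name.
        for step_name, output in reversed(self.steps):
            if step_name == name:
                return output
        raise KeyError(name)

chain = ReasoningChain()
raw = chain.record("sql_query", [("Northeast", -120), ("Southeast", -45)])
chain.record("python_processing", min(raw, key=lambda row: row[1]))
chain.record("visualization", {"type": "bar", "data": raw})

# "Which region had the biggest drop?" resolves against the processed step,
# while "show me the raw data" would resolve against "sql_query".
```

Because each step is kept rather than discarded, a follow-up can target the raw data, the computed output, or the chart without re-running anything.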
Structured memory scaffolding
Under the hood, Kaiya's memory system is built on three core memory types that provide structure to how context is stored and retrieved:
Ontological memory stores definitions, relationships, synonyms, and formulas related to your data. This helps Kaiya understand that "revenue" and "sales" might mean the same thing in your organization, that a specific calculated column uses a particular formula, or that two fields have a parent-child relationship. Ontological memory ensures Kaiya interprets your terminology consistently throughout the conversation.
Instructional memory stores user preferences, administrator-defined constraints, and business rules. For example, a rule like "always exclude internal test accounts from patient counts" or a formatting preference like "show currency values in thousands" can be stored in instructional memory and applied automatically whenever relevant. These act as persistent guardrails that keep Kaiya's outputs aligned with your organizational standards.
Reference memory stores results from external lookups (including web search anchors) and prior analytical outputs. This enables Kaiya to reuse a date it looked up from the web earlier in the conversation.
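The three memory types above can be sketched as a single structure with one compartment per type. The field names, example synonym, and example rule below are all hypothetical, chosen only to mirror the examples in this section.

```python
from dataclasses import dataclass, field

@dataclass
class KaiyaMemory:
    """Toy layout of the three memory types; names are illustrative."""
    # Ontological: definitions, synonyms, relationships, formulas.
    synonyms: dict = field(default_factory=lambda: {"sales": "revenue"})
    formulas: dict = field(default_factory=dict)
    # Instructional: persistent rules and preferences applied as guardrails.
    rules: list = field(default_factory=list)
    # Reference: external lookups and prior analytical outputs.
    lookups: dict = field(default_factory=dict)

    def canonical(self, term):
        """Resolve a user term to its canonical name via ontological memory."""
        return self.synonyms.get(term, term)

memory = KaiyaMemory()
memory.rules.append("exclude internal test accounts from patient counts")
memory.lookups["policy_effective_date"] = "2024-01-01"
memory.canonical("sales")  # resolves to "revenue"
```

Keeping the compartments separate is what lets definitions stay stable across the conversation while lookups and results accumulate alongside them.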
How Kaiya retrieves relevant memory
Kaiya retrieves the right context from memory in two ways:
Always-on rules (deterministic overlays). Some saved context is automatically included every time Kaiya runs in a specific situation. For example, when you are working with a particular Business View, Kaiya may always apply certain filters, definitions, or business rules that have been configured for that Business View.
Smart lookup (semantic retrieval). For context that is not always relevant, Kaiya searches its saved memory and retrieves only what is relevant to your current question. This works semantically, meaning Kaiya can find relevant context even if the words in your question do not exactly match the words used when the context was originally stored. For example, if you previously established that "growth" means percentage change year-over-year, and you later ask about "the increase rate", Kaiya can match those concepts and apply the same definition.
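The two retrieval paths can be sketched together: always-on overlays are included unconditionally, while saved context is scored for relevance. The scoring below uses crude token overlap purely as a stand-in; the real semantic matching goes further and can connect "growth" with "increase rate" even with no shared words. Every name and threshold here is hypothetical.

```python
def retrieve(question, always_on, saved, threshold=0.2):
    """Toy retrieval: overlays always apply; saved context is scored.

    `saved` maps a descriptive key to a stored piece of context. Token
    overlap stands in for the embedding-based similarity a real system
    would use.
    """
    def score(a, b):
        tokens_a, tokens_b = set(a.lower().split()), set(b.lower().split())
        return len(tokens_a & tokens_b) / len(tokens_a | tokens_b)

    relevant = [text for key, text in saved.items()
                if score(question, key) >= threshold]
    return always_on + relevant
```

The important property is the split: deterministic overlays never depend on the question's wording, while smart lookup pulls in only the saved context that scores as relevant.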
What this means for you
You can ask multi-step, multi-turn questions without re-stating context. Each question builds on what came before. Kaiya handles follow-ups without repetition, maintains continuity when you switch between different analysis types within the same thread, and delivers more coherent answers across longer, complex exploratory workflows.