On-Demand: Tellius 6.1: AI Agents Across Every Metric, Document, and Conversation
Analytically Accurate Answers Grounded in the Full Picture
Hello, and welcome to today's webinar, "AI Agents Across Every Metric, Document, and Conversation." My name is Chris Walker. I run product marketing here at Tellius, and I'm joined by Vinod, our head of product. Tellius is an AI analytics platform that applies AI to enterprise data to uncover insights, automate workflows, and accelerate decisions. In addition to expediting your analytics, Kaiya, our 24/7 AI analyst, actually does the multi-step, multi-part analysis for you. Today we're excited to walk you through the latest capabilities of the platform, but before we do, I want to set the scene. Enterprises have invested billions of dollars in BI tools, dashboards, data warehouses, and reporting, just to understand what's happening. But when it comes to why it's happening, the real drivers of churn, deal loss, delays, and product failures, those answers are buried in unstructured data. Some reports say up to 80% of enterprise data is unstructured: customer calls, contract clauses, emails, chat support. So before we walk you through what we're doing in 6.1, I want to tell you a story that has nothing to do with analytics software, but everything to do with why we built 6.1. About a decade ago, Wells Fargo was one of the most celebrated banks in the world, and their structured data looked phenomenal. Average products per customer were industry leading, account growth was up and to the right, revenue per household was best in class, and every dashboard, KPI, and quarterly report said the same thing: this company was winning. But there was another dataset, the unstructured dataset, that wasn't in any dashboard. In 2004, internal investigators sent an email describing what they considered a growing plague of fraudulent sales practices.
By 2010, there were over seven hundred whistleblower complaints logged through internal channels. Ethics hotline transcripts, internal emails, complaints: none of it was appearing in their KPIs. All the structured data said, "We're the best in the industry." The unstructured data said, "This is fraud." In 2016 the scandal broke publicly: 3.5 million fake accounts, $3 billion in fines, and the CEO was forced to resign. Now, I'm not saying anyone in the room is like Wells Fargo. That's not the point. The point is that 80% of your enterprise data is unstructured, scattered across disparate places, and you're not getting the full picture. You're getting the picture of what's easy to query, not the real why. That's the gap we built 6.1 to close. We're not trying to build faster dashboards. We're unlocking the ability to analyze structured and unstructured data together, in the same conversation, with analytical rigor. Vinod, walk us through the key points of the release.

Thanks for the setup, Chris. There's a lot of amazing stuff that's been built into the platform. If you've followed us over the last few months, with 6.0 we announced Agent Mode, which brought a lot of agentic capabilities into the system. Now we do a lot more. There are three big ideas. The first is what Chris already referred to: going beyond just your structured data, meaning rows and columns. We want to bring in unstructured data, which, as the previous slide said, makes up about 80% of the world's data. And there's an interesting reason why: it takes a lot of effort to convert unstructured data into structured data.
You need systems that can tag the data, pull it out, and load it into a structured database before you can query it. That's hard and expensive, so people only do it for the data they consider valuable or analytically critical, and companies have built processes over decades to do exactly that. But there are still enormous amounts of data sitting in, say, your call transcripts, customer complaints, support tickets, and online reviews, and people often do little more than sentiment analysis on it. Our goal is to bring the same rigor to unstructured data, and that's what we support now with Tellius 6.1. We'll talk more about why we think our approach is a game changer, especially when you combine it with the structured data you already have in place. If you have all your analytics plus your unstructured data, what can you do with it? We'll look at some of those use cases. The second idea is bringing in the full understanding of your enterprise knowledge. Context is a really important term these days. You'll hear it from other vendors, and the big players like OpenAI and Anthropic talk about it too. As we all know, LLMs, and for that matter all these AI models, are powerful, but their context windows are limited: anywhere from two hundred thousand tokens to maybe a million with some of the newer models. Meanwhile, your business knowledge spans billions of records. You have call notes, semantic models, workflows, ontologies, and processes laid out in PDFs and PowerPoints. How do you bring in all that information? More importantly, how do you do your analysis with it, and how does the system learn over time? That's a big challenge.
One of the big things we implemented in Tellius to bridge that gap, and more importantly give you that extra unlock, is the notion of memory: long-term memory management. Long-term memory is becoming far more intelligent than it already was. It now learns your workflows, remembers key events, and remembers your preferences. Say I always prefer a bar chart; if I state that preference once, Kaiya remembers it, and the next time I ask, it won't ask again. If I refer to a particular drug by a different name, an alias or a market name, it remembers that too. You can feed it as much information as you want. We have a really smart engine that understands what type of memory each piece is, processes it, and then smartly retrieves and applies it when relevant. There's a lot of engineering behind this, but from an end-user perspective you don't need to do anything. You just keep using Kaiya as you normally do, and the system keeps getting smarter over time. The third item is a new modality for interacting with Tellius as a platform: voice. Voice is obviously a fast-rising way to interact; people have been talking to their phones for years, and voice agents are everywhere. But it's not sufficient to just run speech-to-text, which is very common and simply transcribes. You want to be able to talk naturally, live, with your agent, and that hasn't really been done well. Kaiya's voice mode now lets you talk to your data in a way that follows your thinking. It's not robotic: if you switch context or pivot mid-conversation, it follows along and keeps answering your questions.

Thanks, Vinod. That's a great overview.
Before we go into the demo, let me try to tease out what these features mean in real life for a typical end user. On the left, think of a field rep or sales rep preparing for a QBR, a quarterly business review. Today that means three spreadsheets, forty-five minutes of pivoting, a bunch of filters they have to pick correctly, maybe an email off to an analyst for data they hope comes back in time, maybe even Ctrl+F-ing through a two-hundred-page PDF looking for key terms. That's a ton of work. With the capabilities of 6.1, voice mode, enhanced memory, and combined structured and unstructured analysis, they could be on their way to the QBR and use their voice to ask: How are my accounts growing? How do they compare to the region? What drove the momentum? How are competitors doing at the same sites? And, drawing on the medical literature rather than just a dashboard: how should I tailor my message to the doctors I'm calling on? Ten minutes later, they walk into the meeting with their talking points. On the flip side, a regional manager typically goes through a similar process from the other direction. They've got a request out for a specific data pull and hope it comes back in time. There's probably an analyst supporting them, spending several days slicing regional rollups or running other specific analyses. Now with Kaiya, that manager can query directly, using the business knowledge they already have. Maybe they want to compare performance across the Midwest, drill into what's lagging, benchmark against the national average for a deep analysis, and then export the whole thing into a QBR-ready slide deck.
So Tellius doesn't just assist, though it does that too, helping your multi-part analysis go faster and better; it also produces ready-made artifacts such as a slide deck. Both people walk into their meetings with a diagnostic, not just a data dump. What's interesting is that these two people may not even be coordinating. They didn't have to schedule a sync. They're each attacking the problem through a platform that gives them what they need, and what they've created is coordination and answers, not another meeting. With that, Vinod, I'd love to have you walk us through the capabilities, maybe at a high level first, and then show them to us.

Sure. Let's do a deep dive on what each capability is, talk about the unique things we've added, and then get to the demo. First is the support for unstructured documents that we mentioned earlier. You might wonder what's new about that: "I can already upload a PDF to ChatGPT and chat with it." And you're right, you can, at small scale, typically one or two documents. But say you have a large corpus of documents across your enterprise data sources, like your entire Google Drive, and you want to index all of it. That's a lot of documents, and if you hand them to an LLM directly, the limited context window will struggle; it can take maybe ten files, and that's about it. Kaiya is built for handling extensive document sets. It extracts all the context up front and smartly injects the relevant pieces back to the LLM when a particular question requires them. The second aspect is the indexing itself, and even here we do something interesting.
We're not just doing a simple RAG over your data, which is what a lot of vendors do. We're also trying to understand the knowledge: what entities are present, and what kind of document it is. For example, in pharma, a payer policy document has a lot of specific nuances you need to extract. By having that awareness, we capture the full information in the system, and that sets up the next step: semantic retrieval. Retrieval here isn't just keyword search. We do semantic search, but smart semantic search: we use the entities we extracted earlier to figure out what the user is asking about and hop across multiple sources. A single question might require pulling ten different documents, taking individual parts from each, bringing them together, and summarizing. That's very hard for an off-the-shelf RAG engine with an LLM, whereas Kaiya is built to deliver exactly that. We ran benchmarks and evals to validate this, and we found it consistently outperforms everything else we tested in relevance, usefulness, and accuracy. Next, of course, are citations. Especially when documents are involved, we give you precise inline citations, showing exactly which line or chunk of which paragraph was used in the analysis. And we built all of this to enterprise grade. ACLs are built in, meaning permissions are respected: if you have enterprise permissions we can incorporate them, or you can set them up in Tellius itself, so people only see information from documents they're supposed to have access to.
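The pipeline described above (entities extracted at index time, entity-aware retrieval that hops across documents, document-level citations, and ACL filtering before anything reaches the model) can be sketched roughly as follows. This is a minimal illustration, not Tellius's implementation; every name, alias table, and data structure here is hypothetical.

```python
# Hypothetical sketch of entity-aware, ACL-filtered retrieval. At index time
# each chunk is tagged with the entities it mentions; at query time we match
# the question's entities against those tags (hopping across documents),
# drop any chunk the asking user may not read, and return the rest with
# document-level citations. All names and data are illustrative.

ENTITY_ALIASES = {
    "ergobix": "ergobix",    # brand name
    "erg-101": "ergobix",    # hypothetical alias the index resolves
    "firmagon": "firmagon",
}

DOC_ACL = {                  # who may read each document
    "visit_report_07": {"alice", "bob"},
    "payer_policy_12": {"alice"},
}

def extract_entities(text):
    """Return the canonical entities mentioned in a piece of text."""
    words = (w.strip(".,?!") for w in text.lower().split())
    return {ENTITY_ALIASES[w] for w in words if w in ENTITY_ALIASES}

def build_index(docs):
    """Tag every chunk of every document with the entities it mentions."""
    return [
        {"doc": doc_id, "chunk": chunk, "entities": extract_entities(chunk)}
        for doc_id, chunks in docs.items()
        for chunk in chunks
    ]

def retrieve(question, indexed, user):
    """Entity match across all docs, then filter by the user's permissions."""
    wanted = extract_entities(question)
    hits = [c for c in indexed if c["entities"] & wanted]
    allowed = [c for c in hits if user in DOC_ACL.get(c["doc"], set())]
    withheld = len(hits) - len(allowed)  # "an answer exists, but you lack access"
    citations = sorted({c["doc"] for c in allowed})
    return allowed, citations, withheld

docs = {
    "visit_report_07": ["Dr. Alvarez asked about ERG-101 dosing."],
    "payer_policy_12": ["Firmagon requires prior authorization."],
    "sales_memo_03":   ["Q3 travel budget update."],
}

index = build_index(docs)
_, cites, withheld = retrieve("How is Ergobix perceived versus Firmagon?", index, "bob")
print(cites)      # bob is cited only the visit report
print(withheld)   # one matching document withheld by ACL
```

Note how the alias table lets "ERG-101" satisfy a question about Ergobix, so the answer hops between a visit report and a payer policy. For a user who can read both documents, the same call would cite both and withhold nothing. A production system would replace the toy entity matcher with embeddings plus an extraction model, but the shape of the flow is the same.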
One of the challenges, of course, is that if you just index all your data and hand it to a chatbot, it will happily surface content from documents you're not supposed to see. Setting that up correctly is a lot of work, and we've done it, so you only ever get answers drawn from documents you're authorized to read. Finally, we also put a lot of smarts into extraction itself. When we extract information, we look for structure: if there's a table in a Word doc, or a meaningful layout, we take it into account so that every signal in the document is incorporated and nothing is missed. As much as possible, we avoid loss in extraction. That's some of the underlying engineering that makes this a robust offering, not just RAG bolted on top of your documents. Now, diving into memory, the next big feature we're excited about. We talked about it earlier: context can come in different ways. When you're in the middle of a session, chatting with Kaiya, asking questions back and forth, you really want it to remember what you asked before. That's short-term memory. Short-term memory is obviously there, and it's powerful on its own, especially when you want to refer back to a question asked earlier in the thread. You can go back and forth, and it keeps all the context. That's the first level. The next level is what we call business memory.
Here you can put in your domain, your events, your entities, any seasonality patterns you observe as a business, any domain knowledge you have. You can bring all those pieces in. You can say things like, "My territory is this." It's also the place for personalization. Say there are a hundred users, and each has a preferred product, a profile, a territory or region they care about. Kaiya can remember all of that and be smart about it: if I ask about "my territory," it knows to apply my territory filter without my spelling it out. Then, a step above that, there's behavioral and procedural memory. That captures preferences and how the organization actually works. For example, maybe they always run a weekly sales analysis and take actions based on it; the system learns that. Or it learns how you perform a particular complex analysis: if sales in a region are declining, what do you typically look at? Which accounts in that region are declining, how many calls were made to them, and from that, a plan. Knowing that procedure, Kaiya can apply it in future analyses. It also learns patterns in the data itself. For example, some CPG customers have a notion of hierarchy, where a particular level of the hierarchy indicates the granularity you want to analyze at, and that information can be stored too, so the analysis can expand along it when needed.
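The three memory layers just described (short-term session turns, business memory for facts and personalization, and procedural memory for learned playbooks) can be sketched as a small data structure. This is a hedged illustration of the concept only; the class, method names, and matching logic are all hypothetical, not Tellius's API.

```python
# Hypothetical sketch of layered memory. Short-term memory holds this
# session's turns, business memory holds user facts (territory, chart
# preference, aliases), and procedural memory holds named playbooks for
# recurring analyses. At question time the relevant pieces are merged into
# the context handed to the model. All structure here is illustrative.

class Memory:
    def __init__(self):
        self.short_term = []   # turns in the current session
        self.business = {}     # user facts: territory, preferences, aliases
        self.procedural = {}   # learned playbooks for recurring analyses

    def remember_turn(self, text):
        self.short_term.append(text)

    def context_for(self, question):
        """Assemble everything relevant to inject alongside the question."""
        ctx = {"recent": self.short_term[-3:]}        # last few turns only
        q = question.lower()
        for key, value in self.business.items():      # personalization facts
            if key in q:
                ctx[key] = value
        for name, steps in self.procedural.items():   # learned procedures
            if name in q:
                ctx["playbook"] = steps
        return ctx

mem = Memory()
mem.business["territory"] = "Midwest"
mem.business["preferred_chart"] = "bar"
mem.procedural["declining sales"] = [
    "find declining accounts in the region",
    "check call counts for those accounts",
    "draft an action plan",
]
mem.remember_turn("Which specialties saw fewer calls?")

ctx = mem.context_for("Why are declining sales hitting my territory?")
print(ctx["territory"])    # personalization applied without being asked
print(ctx["playbook"][0])  # learned procedure reused for a new question
```

A real system would retrieve memories by semantic similarity rather than substring match, but the layering is the point: session state, durable business facts, and procedures are stored and retrieved separately, then merged per question.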
This is the kind of system that keeps getting better as you feed it more information. It learns from your patterns, behaviors, and usage, and becomes more and more integral to your analysis and your enterprise's knowledge. The third item is voice mode, and I want to talk a little about why it's not just another chatbot with voice bolted on. Traditionally, a voice agent works like this: speech-to-text transcribes your words, the text goes to an LLM, the LLM produces a response, and the response is read back with text-to-speech. That's the first generation of these tools, familiar from older assistants like Alexa, and Siri for the longest time. The problem is that normal conversation doesn't work that way. You don't ask a perfectly framed question and wait. You go back and forth, you change your mind, you include the ums and uhs, you rephrase mid-sentence. Even after an analysis has started, you might say, "You know what, hold on, let me go back and change the question." So you want something truly natural. There's turn management, which is how you handle the back and forth. We want the system to remember the analytical state: if we're in the middle of a conversation about, say, commercial analytics, it should hold that state and use it while responding. And it should know what the user is asking about, drawing on all the knowledge we covered on the previous screen, so the user doesn't have to explain everything in full detail.
That's what Kaiya's voice mode brings: something truly natural and easy to use for, say, a sales rep who's on the road heading to meet a customer and can just start talking.

Awesome. Let's get into the demo now. I'll stop sharing.

Those three were the key capabilities, but there's a whole lot more in this release. After the demo, I'll flash a screen showing the other goodies baked in. So let's jump into the demo. I can ask questions directly across unstructured and structured data. For example, here I'm asking: which specialties are seeing a decrease in call volumes over the last thirty weeks? I ran this query a few minutes ago, and it tells me that dermatology, rheumatology, and the other specialties in the data have all seen declines in calls across the different groups, with dermatology seeing the largest drop. Now suppose I want to ask which physicians are most active. Let's say I want to look at rheumatology, so I'll pick that. What happens is that Kaiya understands the question and checks whether it can answer using additional data. For example, it might go into the call transcripts, where we have unstructured data organized by specialty, and pull that in. I ran this question a little earlier to see which HCPs are writing regular prescriptions for rheumatology, and it found that Dr. Alvarez at an academic medical center is doing a lot of work around rheumatology.
You can see there's a citation here. The citation is a copy of a visit report, and I can read more: clicking it shows me the exact visit report and all its details. These are the raw field reports being used for the unstructured analysis, which in turn augments the structured analysis you see. The nice thing is that I can go and open the source, which takes me to the original report, if I have access to it, and view the actual raw document. In many cases you may not have access, because our unstructured support comes with full enterprise guardrails, including ACLs. Only documents I have access to will be shown. If I ask a question I lack access for, the system will say that an answer exists, but I can't see it until I get access to the raw documents. We make sure we never show users anything they're not supposed to access. So that's how unstructured analysis works in the system. Now I'll ask some follow-up questions that could be interesting. For example: how do physicians perceive Ergobix, one of the other drugs we're looking at, compared with competitors like Firmagon, Lupron, and Eligard? I'll hit enter and see what it comes up with. Based on the question, the system should figure out that this probably needs unstructured analysis, so it will pull up the documents, run the analysis, and come back with the appropriate answer. There are a few things happening here that are pretty cool.
Notice that I did not change the data source or data model; I didn't pick anything. The system recognized that this question can't be answered with the structured data model we had selected, so it needs to look at unstructured sources. It knows from the metadata what data we have indexed, finds the right unstructured sources, searches them, and returns the correct answer. Okay, the run finished, and it's giving me good information based on the data we have access to: sales documents, visit reports, and so on, a combination of different sources. Using all of that, it answers how physicians perceive the value of Ergobix versus competitors. It's a pretty detailed report. It covers efficacy and safety, looks at the competitors, looks at convenience, and gives me a nice summary table, which is very valuable. If you're a sales rep about to talk to an HCP about this, or if an HCP has a question about safety, efficacy, or value, you can quickly pull up this analysis and use it to answer the prescriber. And as I mentioned, everything is grounded in your data. I can always look up the source and read the original raw data if I want; the chunks are all right here. I can also open the raw source, which in this case lives in Google Drive, though it could be in S3 or elsewhere. That makes it trustworthy: everything we're referencing is built on your systems. Keep in mind, we can also enable web search.
If you want, you can ask the system to go out and search public data as well. So that covers what we showed in the purple box: Kaiya agentic analytics, unstructured data analysis enriching structured analysis, and the voice mode we talked about. One thing we only hinted at is the web search integration, which is really interesting. The idea is that we use a limited web search to identify anchor points in real-world data and then use them as filters or other constraints within our deterministic analytics engine. For example, you could ask, "How did the Super Bowl impact sales?" It will do a quick search to identify the exact date of the 2026 Super Bowl, then use that as a filter on your dataset to identify the patterns that emerged afterward. These things are really handy and both expedite and enrich your analysis, again with full transparency into what was identified. On the right-hand side, under memory, knowledge, and personalization, Vinod talked about the multiple layers of context management and how they enrich your analysis. Next is reflection-based auto-correction, which ensures maximum accuracy, and after that the orchestrator, an improved orchestration of multi-part compound queries. And in the core platform there's a bunch of other very exciting additions as well: full data export to Excel workbooks, exporting conversations into board-ready PowerPoint presentations, automatic labeling of charts, aggregation support in advanced pivoting, and much more. Thanks, everyone, for attending, and we'll follow up after this session. Thank you.


Your enterprise data lives in two worlds: structured metrics in dashboards, reports, and spreadsheets, and everything else — contracts, call transcripts, research reports, payer policies — buried in documents. No connection between them. Tellius 6.1 changes that. See how Kaiya orchestrates a team of AI agents — some reason over your metrics, others over your documents and knowledge base — to deliver analytically accurate answers grounded in the full picture.
This isn't a feature walkthrough. It's a fundamentally different way to work with enterprise data. Whether your team runs pharma commercial analytics, CPG category management, revenue operations, or FP&A — watch how AI agents that understand both your numbers and your documents turn questions you never thought to ask, or that were previously impossible to ask, into decisions you can act on.
What You’ll See:
⭐ Beyond Rows & Columns — Unstructured Data Analysis
Kaiya's AI agents now analyze PDFs, call transcripts, research reports, and policy documents alongside your structured data. Connect Google Drive, S3, Azure Blob, Gong, and more. Ask "What objections came up most in deals we lost?" and agents analyze 200+ calls, rank patterns, and cite sources — with document-level citations on every answer.
⭐ Contextual Memory — Agents That Know Your Business
Kaiya retains business context across turns and sessions. Ask "Did Thanksgiving impact TRx for Product X?" and Kaiya looks up the timing, applies the right comparison window, and keeps that context for follow-ups. No manual date lookups. No starting cold every session.
⭐ Just Talk — Voice-First Analytics
Speak naturally, change direction mid-sentence, and Kaiya keeps up. Voice mode handles multi-turn conversations with real speech patterns — pauses, corrections, mid-thought pivots — not just transcription. Field reps get pre-meeting answers without typing.
⭐ Smarter Agent Orchestration
Kaiya auto-routes every question to the right analysis path — SQL, Python, or Deep Insights — while a Reflection agent checks outputs for accuracy before delivering results. Your team asks questions. The agents figure out the rest.


























