Beyond the CoPilot — Providing real enterprise value for generative AI

September 27, 2024

Many companies have built a chatbot they call a “copilot” on top of their existing technology offerings, but the true value of generative AI comes from connecting directly to the data.

Business intelligence platforms have all released chatbots since ChatGPT took the world by storm. These chatbots sit on top of the platforms beneath them, and that is their fundamental flaw. Those platforms were designed for developers and other technical users, so these so-called “copilots” mostly make the technical work simpler rather than answer the needs of a businessperson.

First, the current approach is not working. Take, for example, the amount of data in a company, which has grown exponentially over the last two decades.

Yet, according to a Deloitte study, only 37 percent of companies use analytics and dashboards broadly across their organizations. As the graph above shows, that figure has remained stagnant for two decades, and dashboards remain workable only for the most technically savvy users, or for specialized firms and groups that can shape the data into something business users can consume on a semi-regular basis.

But business users want to understand the dizzying complexity of a modern enterprise. They want to understand it in real-time, with the data in a relatable and intelligible format. They don’t want dashboards, but they do want answers.

Business users clearly do not want dashboards, and dashboards are also slow. According to the white paper “Dashboards are Dead,” requests for additional information take 4.5 business days on average. They are expensive, too: each costs around $18,000, because a technical team must work with the data. According to Gartner, services firms have built a business worth $320 billion per year to answer this need.

The better way

Bolting on a copilot to current business intelligence software misses the vastly simplified user experience, the significant productivity improvements and the core value of LLMs. At their core, LLMs function as translation services — in this case, from a human to a database and back again.

Building a generative-AI-first application, such as hila Conversational Finance, takes advantage of the unique characteristics of LLMs, such as their adaptability and their autonomy.

First, LLMs are very different from other software. They “hallucinate” and do not return the same response to the same prompt. This is unique among software, which has, to date, been largely “neat,” providing consistent responses to consistent queries. This “fuzziness” inherent in an LLM, though, also allows for a new kind of fluency: the ability to translate from English into a variety of languages, including coding languages. There is an inherent understanding of syntax in an LLM, which enables language, with all its imprecision and fuzziness, to become neat, executable code.
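That translation from fuzzy English to neat code can be made concrete. In the minimal sketch below, a canned string stands in for the model call (a real system would send the question and the table schema to an LLM); the point is that the model's output is ordinary SQL that a database executes deterministically. The `sales` table and the `llm_translate` helper are illustrative assumptions, not part of any real product.

```python
import sqlite3

def llm_translate(question: str) -> str:
    # Stand-in for an LLM call: a real system would prompt the model with
    # `question` plus the schema. The canned answer shows the output shape.
    return "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"

# A tiny in-memory database plays the role of the system of record.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("EMEA", 120.0), ("APAC", 80.0), ("EMEA", 30.0)])

# Fuzzy English in, neat SQL out, deterministic rows back.
sql = llm_translate("What are total sales by region?")
rows = conn.execute(sql).fetchall()
print(rows)  # [('APAC', 80.0), ('EMEA', 150.0)]
```

The fuzziness lives entirely on the language side; once the SQL exists, execution is as consistent as any traditional software.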

Next, LLMs can run autonomously on complex tasks, complementing each other with distinct focuses. “Agents” can carry their own instructions and pursue tasks distinct from the initial query. When hila Conversational Finance receives a query, for example, eight separate agents perform different functions, from retrieving the data from the system of record to creating the chart and writing the explanation. These agents work in concert with each other, and the key models are separate, fine-tuned, and domain-specific.
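A pipeline like that can be sketched in a few lines. This is a hypothetical illustration of the pattern, not hila's implementation: each agent has one focus, operates on a shared state, and hands its result to the next. The agent names and the three-step retrieve → chart → explain flow are assumptions made for the example.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    name: str
    run: Callable[[dict], dict]  # each agent transforms the shared state

def retrieve(state: dict) -> dict:
    # Stand-in for querying the system of record.
    state["rows"] = [("Q1", 100), ("Q2", 140)]
    return state

def chart(state: dict) -> dict:
    # Stand-in for chart generation from the retrieved rows.
    state["chart"] = {"type": "bar", "points": state["rows"]}
    return state

def explain(state: dict) -> dict:
    # Stand-in for the narrative summary.
    quarter, value = max(state["rows"], key=lambda r: r[1])
    state["summary"] = f"{quarter} was the strongest quarter at {value}."
    return state

pipeline = [Agent("retriever", retrieve),
            Agent("charter", chart),
            Agent("explainer", explain)]

state = {"query": "How did revenue trend by quarter?"}
for agent in pipeline:
    state = agent.run(state)
print(state["summary"])  # Q2 was the strongest quarter at 140.
```

The value of the pattern is separation of concerns: each step can be backed by a different fine-tuned, domain-specific model without the others needing to change.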

These agents work best when they work directly on the code, or as guardrails against the possibility of hallucinations. This is a level of complexity that steps beyond a chatbot and enables generative AI to deliver significant value. It is something that hila Conversational Finance does but that the various “copilots” simply cannot, because they work with a stack they inherited, one built over decades.
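One simple guardrail pattern is to check model-generated code before it ever runs. The sketch below, a hypothetical example rather than any product's actual checker, rejects generated SQL that is not a read-only query over known tables, limiting the blast radius of a hallucination. The `KNOWN_TABLES` set and the naive regex are assumptions for illustration; a production checker would parse the SQL properly.

```python
import re

# Assumed schema for the example: the only tables the model may touch.
KNOWN_TABLES = {"sales", "expenses"}

def guardrail(sql: str) -> bool:
    """Allow only read-only queries that reference known tables."""
    stmt = sql.strip().rstrip(";")
    if not stmt.lower().startswith("select"):
        return False  # reject writes, drops, and anything non-SELECT
    # Naive table extraction for illustration only.
    tables = set(re.findall(r"\bfrom\s+(\w+)", stmt, flags=re.IGNORECASE))
    # Every referenced table must be one the system actually has.
    return bool(tables) and tables <= KNOWN_TABLES

print(guardrail("SELECT * FROM sales"))          # True
print(guardrail("DROP TABLE sales"))             # False
print(guardrail("SELECT * FROM imaginary_tbl"))  # False
```

The third case is the hallucination scenario: the model invents a table, and the guardrail catches it before the database ever sees the query.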

The iPhone greatly lowered the bar for smartphone adoption: it removed the stylus and keypad and replaced them with our fingers. LLMs, when effectively managed, enable a new way of working. Instead of demanding a new skill set to query a database, they require only knowledge of English.

How Conversational Finance changes everything

To start, the data in a system of record can be interrogated similarly to a person in a meeting — using natural language. This vastly simplifies the entire technical chain, removing the various people and software that once stood in between a business user and an insight.

Today, with Conversational Finance, any business user can ask a question. Responses come back within seconds instead of days, and follow-up questions do not require additional cycles. Users work in their own native language — our users often work in English, Thai, and Japanese, to name a few — and responses arrive as clear tables and graphs.

The speed of the responses allows for interrogation, follow-up questions, and unlimited iteration, and it vastly undercuts the cost of the dashboard creation process. Instead of $18,000, the cost is the software itself, plus perhaps a managerial review.

Conversational Finance brings together various models, fine-tuned and domain specific, and has them always ready, always on call, tireless and available for analysis of the data in real-time.

Find out more and contact us here.