2024 AI Predictions

Vianai Editorial Team
December 14, 2023

In 2023, there was an explosion of attention on AI. The hype was at an all-time high, with AI promising to change the world. But as the year wore on, and particularly as 2023 draws to a close, we have seen the realities that must be addressed in order to drive widespread adoption and value. In 2024, we think the focus on human-centered AI - that is, the reliable, safe and purposeful development and use of AI - will take center stage. This focus will drive attention to issues such as hallucinations, performance, user experience and more.

2024 Vianai Predictions

  • There will be an increased focus on the safety of AI. From a development standpoint, the race to be first took center stage in early 2023. However, as the year progressed, it became clear that guardrails are needed and that serious limitations in the technology must be addressed, including hallucinations, bias, toxicity, and ethical and legal issues, among other problems. There was a growing recognition that we must have safe, reliable development and use of AI. In 2024, we think the safety and reliability of AI will take up a much larger portion of the conversation, with new innovators and entrepreneurs, and new tools and techniques, emerging to solve these problems.

  • AI hallucinations, and their impact on AI adoption, will continue to evolve based on the context in which AI systems are used. In 2023, the topic of hallucinations was treated with broad brush strokes. In 2024, this is likely to become more focused on particular use cases. For example, hallucinations in fictional writing and poetry may be acceptable, while hallucinations in AI systems that power financial reporting to regulators and shareholders won't be acceptable - and could hamper adoption. As the use cases of AI continue to unfold, users and enterprises will become more sophisticated in distinguishing between high-quality solutions for their particular use case and those that are less trustworthy or not usable at all. While the underlying technology in LLMs is unlikely to change meaningfully enough to solve the hallucination problem, domain-specific applications with anti-hallucination techniques built in will provide the best opportunity to fill the gap (one illustrative example of such a guardrail appears after this list).

  • There will be increased government involvement in the innovation and development of AI. The late 2023 Executive Order on AI shines a spotlight on an important issue - the safe, responsible, ethical development of AI tools. In 2023, governments started rolling out their policies and frameworks for AI development, testing and use, with actions required in 2024 and beyond. In 2024, governments will grapple with how to enforce these requirements and policies, AI providers will grapple with how to sufficiently address and meet them, and enterprises, consumers and others will grapple with how to interpret all the noise. But even as this balance between self-regulation and government regulation plays out, we can rest assured that government interest and involvement in AI will only increase, and likely accelerate, in the coming months.

  • Enterprises will become more sophisticated about selecting the AI models that make the most sense for their business requirements. Selecting AI models for specific use cases within an enterprise context is not a one-size-fits-all situation; there are pros and cons for every type of model on the market. Enterprises must do the work to determine what they need in order to obtain the desired results. This includes weighing cost, ROI, performance, business requirements, and the privacy and security of their data against the best models available to them, whether public, open source, or private. As enterprises become more familiar with the pros and cons of different types of models, in the context of their heterogeneous landscapes, the focus will turn to choosing the model or models that deliver real, tangible business outcomes while meeting the requirements for cost, privacy and other factors.

  • A more robust AI ecosystem will take shape. AI infrastructure is expensive, and costs are high across the board. Talent and skills in AI are scarce. Enterprise landscapes are heterogeneous, distributed and messy. Finding a meaningful use case for AI - with the guardrails for reliability, security and privacy, across multiple data sources, suitable and safe to use in mission-critical functions, and that brings value to the user and the business - is highly complex. This complexity in getting to any business value from AI will spur a more robust ecosystem. In 2024, enterprises will need to determine whether they have the resources to do this on their own, or whether an external partner makes more sense. In many cases, a partner - whether software and services, product engineering, staffing or other - will make the most sense, to deal with both the talent shortage and the complexity issues.
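
To make the hallucination point above more concrete, here is a minimal, illustrative sketch of one kind of anti-hallucination guardrail a domain-specific application might build in: checking that numeric figures in a model's answer actually appear in the underlying source documents before the answer reaches a user. The function name and logic are hypothetical, shown only to indicate the flavor of such checks, not to describe any particular product.

```python
# Illustrative only: before showing a model's answer to a user, verify that
# any numeric figures it contains actually appear in the source documents;
# otherwise route the answer for human review. All names here are hypothetical.
import re

def ungrounded_figures(answer: str, sources: list[str]) -> list[str]:
    """Return numeric figures in `answer` that appear in no source document."""
    source_text = " ".join(sources)
    figures = re.findall(r"\d[\d,\.]*%?", answer)
    return [f for f in figures if f not in source_text]

# Usage: flag an answer instead of passing it straight through.
answer = "Revenue grew 14% to $2.3B last quarter."
sources = ["Quarterly filing: revenue of $2.1B, up 9% year over year."]
flagged = ungrounded_figures(answer, sources)
if flagged:
    print(f"Possibly hallucinated figures, route for review: {flagged}")
```

In practice, applications built for regulated, mission-critical use cases tend to layer several such checks (retrieval grounding, citation verification, human review) rather than relying on any single one.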

The complexities of bringing the full potential of AI to business users in heterogeneous enterprise landscapes are significant. That's why we are working with partners (Cognizant, KPMG) to bring the expertise, product capabilities, underlying tools and technologies, talent, and the implementation execution to help companies navigate the challenges and drive tangible business value from AI. Read more about hila Enterprise.