Dr. Sanjay Rajagopalan, Chief Design & Strategy Officer of Vianai, joined a panel at the 2022 Ai4: Artificial Intelligence Conference this past fall for a lively discussion on responsible AI, a topic that has been part of Vianai's mission since the company's beginning.
Titled "Human-Centered AI: Adopting AI Responsibly Across the Enterprise," the panel examined why adopting responsible AI is critical to the long-term success of businesses. The panelists explored how to build trust between humans and AI by ensuring AI solutions are responsible and designed with employee and user needs in mind.
During the panel, Dr. Rajagopalan discussed the risks a company faces if it lacks sound systems to ensure the responsible deployment of AI solutions. Building infrastructure for explainability, monitoring, transparency, validation, and governance in AI systems is crucial. These tools, processes, and policies must be in place to avoid catastrophic events that could have a disproportionate financial impact on the business, derailing ROI among many other consequences.
Dr. Rajagopalan highlighted that human-centered AI and responsible AI are deeply interdependent. AI needs to work side by side with humans, amplifying their abilities while preserving the uniquely human things only people are capable of. There is already a deficit of trust in this area because of the notion that AI will replace humans, so the enterprise must work to close this gap.
Dr. Rajagopalan acknowledged that the vast majority of AI use cases are not replacement use cases; instead, they are assistance and application use cases, especially in the enterprise. Human-centered AI entails having tools and frameworks to build solutions with longevity and sustainability. This is where the enterprise starts to see responsible AI becoming a critical aspect of building trust between humans and AI. Dr. Rajagopalan argued that responsible AI is a necessary capability for actualizing the vision of human-centered AI.
Dr. Rajagopalan described three postures an enterprise can take toward AI risk: reactive, proactive, and designful. Being reactive entails identifying the root causes of issues quickly and retraining models when needed. To succeed, companies must have monitoring tools to identify issues, along with the tools and frameworks to fix any problems they face, preferably in near real-time. Being proactive, by contrast, means having tools that alert you before something happens, so the company can take corrective action. A proactive AI system requires mechanisms that watch trends and surface a diagnosis before an issue causes harm.
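The reactive-versus-proactive distinction can be made concrete with a small sketch. The code below is purely illustrative, not Vianai's product: the class name, metric, and thresholds are all hypothetical. It watches the trend of a model-quality metric and warns while the metric is still healthy but heading toward a failure threshold, rather than only reacting after harm has occurred.

```python
# Illustrative sketch only: a toy "proactive" monitor. All names and
# thresholds here are hypothetical assumptions, not a real product API.
from collections import deque


class TrendMonitor:
    """Alert when a quality metric is trending toward a failure threshold."""

    def __init__(self, window=5, failure_threshold=0.80, warn_margin=0.05):
        self.window = deque(maxlen=window)   # recent metric readings
        self.failure_threshold = failure_threshold
        self.warn_margin = warn_margin

    def observe(self, metric_value):
        """Record a new metric reading and return "OK", "WARN", or "FAIL"."""
        self.window.append(metric_value)
        if metric_value < self.failure_threshold:
            return "FAIL"  # reactive territory: the harm has already occurred
        # Proactive check: fit a simple least-squares slope over the recent
        # window and warn if the projected value approaches the threshold.
        if len(self.window) == self.window.maxlen:
            n = len(self.window)
            xs = range(n)
            mean_x = sum(xs) / n
            mean_y = sum(self.window) / n
            cov = sum((x - mean_x) * (y - mean_y)
                      for x, y in zip(xs, self.window))
            var = sum((x - mean_x) ** 2 for x in xs)
            slope = cov / var
            projected = self.window[-1] + slope * n  # value n steps ahead
            if projected < self.failure_threshold + self.warn_margin:
                return "WARN"  # act now, before the issue causes harm
        return "OK"
```

A declining accuracy series triggers a "WARN" while every individual reading is still above the failure threshold, which is exactly the window in which corrective action is cheapest.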
Being designful means building systems that do not fail in the first place, which is essential to responsible AI. Building systems, frameworks, and products for longevity, sustainability, responsibility, and reliability helps Vianai show organizations how specific capabilities drive outcomes around reactive, proactive, and designful behavior. These tools are what ultimately lead to increased trust between humans and AI systems.
On the panel, Dr. Rajagopalan also discussed how operating at scale allows risks to compound exponentially. At scale, there are many aspects you can no longer track individually, so you rely on the processes, policies, and frameworks your system has in place. These bring AI systems to production, monitor and govern them, and take them offline if they are not performing well.
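At scale, this governance cannot be manual; it has to be encoded as policy. The following is a minimal, hypothetical sketch of that idea (the function, field names, and threshold are illustrative assumptions, not a real Vianai API): one rule applied uniformly across a fleet decides which models stay in production and which are retired.

```python
# Hypothetical sketch: a simple performance policy applied across a fleet of
# deployed models, since no one can track each model individually at scale.
def apply_governance_policy(models, min_accuracy=0.85):
    """Split deployed models into those kept in production and those retired.

    `models` is a list of dicts with hypothetical "name" and "accuracy" keys.
    """
    kept, retired = [], []
    for model in models:
        if model["accuracy"] >= min_accuracy:
            kept.append(model["name"])
        else:
            retired.append(model["name"])  # take it out of production
    return kept, retired
```

The point of the sketch is the design choice: the decision lives in a policy function that can be audited and versioned, rather than in any individual's head.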
Dr. Rajagopalan, and Vianai as a company, believe that AI complements humans, offering scale and repeatability that humans cannot easily replicate. If we bring these systems to work at scale, responsibly combined with human judgment and reasoning, we can increase the value of AI.