Generative AI is evolving. Knowledge-based applications such as AI chatbots and copilots are giving way to autonomous agents that can reason and execute complex, multi-step workflows. These agents are made possible by what is known as agentic AI: systems that understand context, set goals, and adapt their actions to changing circumstances. This latest development in AI is poised to transform the way businesses operate.
With these capabilities, agentic AI can perform a range of tasks previously thought impossible for a machine, such as identifying sales targets and creating pitches, analyzing and optimizing supply chains, or acting as personal assistants that save time for managers and employees.
Amazon’s recent partnership with Adept, a specialist in agentic AI, signals growing recognition of these systems’ potential to automate diverse, highly complex use cases across business functions. But to fully utilize this technology, organizations must first overcome a number of challenges with the underlying data, including latency issues, data silos, and inconsistent data.
Rahul Pradhan, VP Product and Strategy, Couchbase.
The three foundations of agentic AI
To carry out these complex functions successfully, agentic AI needs three core components: a plan to work from, large language models (LLMs), and access to robust memory.
A plan allows the agent to perform complex, multi-step tasks. For example, handling a customer complaint may involve a predefined plan to verify identity, gather details, provide solutions, and confirm the resolution.
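To make this concrete, such a plan can be represented as a simple ordered structure the agent steps through. The following Python sketch is purely illustrative; the class and step names are hypothetical, not any particular framework's API.

```python
from dataclasses import dataclass, field

@dataclass
class PlanStep:
    """A single step in an agent's predefined plan."""
    name: str
    instruction: str
    done: bool = False

@dataclass
class Plan:
    """An ordered, multi-step plan the agent executes in sequence."""
    steps: list[PlanStep] = field(default_factory=list)

    def next_step(self) -> PlanStep | None:
        """Return the first unfinished step, or None when the plan is complete."""
        return next((s for s in self.steps if not s.done), None)

# A hypothetical complaint-handling plan mirroring the example above.
complaint_plan = Plan(steps=[
    PlanStep("verify_identity", "Confirm the customer's identity."),
    PlanStep("gather_details", "Collect details about the complaint."),
    PlanStep("provide_solutions", "Propose one or more solutions."),
    PlanStep("confirm_resolution", "Confirm the customer accepts the resolution."),
])
```

Keeping the plan explicit like this lets the agent resume after interruptions and makes its progress auditable, rather than burying the workflow inside a single prompt.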
To follow this plan, an AI agent can use multiple LLMs to break down problems and perform subtasks. In the context of customer service, the agent can call on one LLM to summarize the current conversation with the customer, creating a working memory the agent can refer back to. A second LLM could then plan the next actions, a third could evaluate the quality of those actions, and a fourth could generate the final answer the user sees, informing them of possible solutions to their problem.
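A minimal sketch of this division of labor might look like the following, where `call_llm` is a hypothetical stand-in for whatever model API the agent actually uses, and the four roles correspond to the summarizer, planner, evaluator, and responder described above.

```python
def call_llm(role: str, prompt: str) -> str:
    """Hypothetical stand-in for a real model API call. In practice each role
    could be a separate model, or the same model with a role-specific prompt."""
    return f"[{role} output for: {prompt[:40]}...]"  # placeholder response

def handle_turn(conversation: str, user_message: str) -> str:
    # 1. A summarizer LLM condenses the conversation into working memory.
    working_memory = call_llm(
        "summarizer", f"Summarize so far:\n{conversation}\nLatest: {user_message}"
    )
    # 2. A planner LLM decides the next actions based on that summary.
    proposed_actions = call_llm(
        "planner", f"Given: {working_memory}\nPlan the next actions."
    )
    # 3. An evaluator LLM assesses the quality of the proposed actions.
    critique = call_llm("evaluator", f"Assess these actions: {proposed_actions}")
    # 4. A responder LLM generates the final answer the customer sees.
    return call_llm(
        "responder",
        f"Memory: {working_memory}\nActions: {proposed_actions}\n"
        f"Critique: {critique}\nWrite the reply to the customer.",
    )
```

The value of the pattern is separation of concerns: each role gets a narrow prompt, and the evaluator can catch a weak plan before anything reaches the user.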
And just like humans, AI systems cannot make informed decisions without using memory. Imagine a healthcare assistant AI that can access a patient’s medical history, health records, and previous consultations. By remembering and drawing on this data, the AI can provide personalized and accurate information, explain to a patient why a treatment has been adjusted or remind them of test results and doctor’s notes.
Agents need both short-term memory, for tasks that require immediate attention, and long-term memory, to retain the context the AI can rely on for future inferences. But here lies one of the biggest obstacles to optimizing agentic AI today: companies’ databases are often not advanced enough to support these memory systems, limiting the AI’s ability to provide accurate and personalized insights.
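To see what such a memory system asks of the underlying database, consider a toy sketch of the two tiers (all names are hypothetical). Even this simplified version implies fast writes to short-term context plus durable, queryable long-term storage, which is exactly where conventional databases come under strain.

```python
from collections import deque

class AgentMemory:
    """Illustrative sketch of the two memory tiers described above.

    Short-term memory holds the immediate context of the current task;
    long-term memory persists facts the agent may need for future inferences.
    """

    def __init__(self, short_term_limit: int = 20):
        # Bounded buffer of recent events: the "immediate attention" tier.
        self.short_term = deque(maxlen=short_term_limit)
        # Durable key-value store standing in for a real database.
        self.long_term: dict[str, str] = {}

    def observe(self, event: str) -> None:
        """Record an event in short-term memory."""
        self.short_term.append(event)

    def remember(self, key: str, fact: str) -> None:
        """Persist a fact to long-term memory (e.g. a treatment change)."""
        self.long_term[key] = fact

    def context(self, keys: list[str]) -> str:
        """Assemble the context the agent reasons over: recent events plus
        any long-term facts relevant to the current request."""
        recent = "\n".join(self.short_term)
        recalled = "\n".join(self.long_term.get(k, "") for k in keys)
        return f"{recent}\n{recalled}"
```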
The data architecture needed to support AI agents
The predominant approach to meeting memory system requirements is to use dedicated, standalone database management systems for various data workflows. However, the practice of using a complex web of these self-contained databases can hurt an AI’s performance in a number of ways.
Latency issues arise when each database has a different response time, causing delays that can disrupt AI operations. Data silos, where information is isolated in separate databases, prevent the AI from gaining a unified view and hinder comprehensive analysis, causing the agent to miss connections and produce incomplete results. And at a more fundamental level, inconsistent data (variations in quality, formatting, or accuracy) can cause errors and skewed analysis, leading to faulty decision-making. Using multiple database solutions for a single purpose also creates data sprawl, complexity, and risk, making it difficult to trace the source of AI hallucinations and debug incorrect variables.
Many databases are also not well suited to the speed and scalability that AI systems require. Their limitations become more apparent in multi-agent environments, where fast access to large amounts of data (e.g., via LLMs) is essential. In fact, only 25% of companies have high-performance databases that can manage unstructured data at high speed, and only 31% have consolidated their database architecture into a unified model. These databases will struggle to meet GenAI’s demands, let alone support any kind of unrestricted AI growth.
As GenAI evolves and agentic AI becomes more common, unified data platforms will become central to any successful AI implementation by organizations. Updated data architectures deliver benefits by reducing latency with edge technology, efficiently managing structured and unstructured data, streamlining access, and scaling on demand. This will be a key development in building cohesive, interoperable and resilient memory infrastructures and enabling companies to finally benefit from the automation, precision and adaptability that agentic AI has to offer.
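As a hypothetical illustration of what this looks like from the application's side, a unified platform can serve both structured lookups and similarity search over unstructured data through one interface, instead of routing each workload to a separate database. The method names below are invented for the sketch, not any vendor's actual API.

```python
class UnifiedDataPlatform:
    """Illustrative interface for a single platform serving both the
    structured and unstructured workloads described above."""

    def __init__(self):
        self.records: dict[str, dict] = {}          # structured documents
        self.vectors: dict[str, list[float]] = {}   # embeddings of unstructured data

    def put(self, key: str, record: dict, embedding: list[float]) -> None:
        """Store a record and its embedding together, avoiding a separate silo."""
        self.records[key] = record
        self.vectors[key] = embedding

    def get(self, key: str) -> dict | None:
        """Structured lookup by key."""
        return self.records.get(key)

    def nearest(self, query: list[float]) -> str | None:
        """Naive similarity search (a real platform would index this)."""
        def dot(a: list[float], b: list[float]) -> float:
            return sum(x * y for x, y in zip(a, b))
        return max(self.vectors, key=lambda k: dot(self.vectors[k], query),
                   default=None)
```

Because both access paths hit the same store, there is one source of truth to keep consistent, one latency profile to tune, and one place to look when debugging an agent's faulty recall.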
Embracing the AI revolution
Agentic AI opens the door to a new era in which AI agents act as collaborators and innovators, fundamentally changing the way people interact with technology. Once companies overcome the challenges of disparate data sources and build optimized memory systems, they will enable the widespread use of tools that can think and learn like humans, with unprecedented levels of efficiency, insight, and automation.
This article was produced as part of TechRadar Pro’s Expert Insights channel, where we profile the best and brightest minds in today’s technology industry. The views expressed here are those of the author and are not necessarily those of TechRadar Pro or Future plc. If you are interested in contributing, you can read more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro