How businesses can prepare for the age of agentic AI
By Adi Polak, Director of Advocacy and Developer Experience Engineering, Confluent
Friday, 07 March, 2025
Artificial intelligence has had a transformative impact on the way we do business — and it’s only accelerating. As organisations adapt to AI-driven efficiencies, a new evolution is emerging that could redefine business operations more than future generations of foundation models: agentic AI.
Unlike earlier technologies, which were rules-based and had limited ability to act independently, agentic AI engages in complex multi-step processes, often interacting with different systems to achieve a desired outcome. Imagine an AI-powered help desk that uses natural language processing to understand and process support tickets — autonomously resetting passwords, installing software updates, and escalating issues to human staff when needed. Gartner predicts that by 2028, 33% of enterprise software applications will include agentic AI, up from 0% in 2024, enabling at least 15% of day-to-day work decisions to be made autonomously.
But while the possibilities are exciting, the journey to implementation is not without its hurdles. Enterprises must be prepared to address several critical issues before fully embracing agentic AI to ensure its reliability and effectiveness.
Logic and thinking
Agentic AI operates through a network of autonomous agents, each with a distinct role. At the core, one agent acts as a ‘planner’, coordinating the actions of multiple agents, while another model serves as a ‘critical thinker’, offering feedback on the output of the planner and of the agents executing its instructions. This feedback loop enhances the model’s insights over time, resulting in progressively better outcomes.
However, for this process to work reliably in real-world applications, the critical-thinker model needs to be trained on data that is as closely grounded in reality as possible. This includes detailed information about goals, plans, actions and results, alongside extensive feedback. Achieving this level of accuracy is no small task. It can often require numerous iterations to provide the model with sufficient data to act reliably as a critical thinker. But without this foundation, agentic AI risks producing inconsistent or unreliable outputs, limiting its potential as a trusted business tool.
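To make the planner/critic relationship concrete, here is a minimal Python sketch of the feedback loop described above. The plan, execute and critique functions are placeholders standing in for real model and tool calls; they are not part of any specific agent framework.

```python
# Illustrative planner/critic loop. plan(), execute() and critique() are
# placeholders for real model and tool calls, not a specific framework.

def plan(goal, feedback=None):
    # A planner model would break the goal into concrete steps here.
    steps = [f"draft a plan for: {goal}"]
    if feedback:
        steps.append(f"revise plan based on feedback: {feedback}")
    return steps

def execute(steps):
    # Worker agents would carry out each step (API calls, tool use, etc.).
    return [f"result of '{step}'" for step in steps]

def critique(goal, results):
    # A critic model, grounded in goals, plans, actions and results,
    # decides whether the outcome is acceptable and suggests improvements.
    accepted = len(results) > 1              # placeholder acceptance test
    feedback = None if accepted else "add a verification step"
    return accepted, feedback

def run_agent(goal, max_rounds=3):
    feedback, results = None, []
    for _ in range(max_rounds):
        steps = plan(goal, feedback)
        results = execute(steps)
        accepted, feedback = critique(goal, results)
        if accepted:
            break
    return results

if __name__ == "__main__":
    print(run_agent("reset a locked user account"))
```

The design point is the loop itself: the critic’s feedback flows back into the next planning round, which is why the critic needs training data that reflects real goals, plans, actions and outcomes.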
Predictability and reliability
For decades, interacting with computers has been a fairly predictable process — users provide clear instructions, and the system follows them step by step. Agentic AI changes this dynamic by allowing teams to lead with an outcome they want to achieve rather than with step-by-step instructions. The agent then autonomously determines how to achieve the goal, which introduces a degree of unpredictability in the process.
This randomness isn’t new. Early generative AI systems, like ChatGPT, faced similar challenges. But in the last two years, we’ve seen considerable improvements in the consistency of generative AI outputs, thanks to fine-tuning, human feedback loops, and consistent efforts to train and refine these models. We’ll need to put a similar level of effort into minimising the randomness of agentic AI systems by making them more predictable and reliable.
Data privacy and security
Some companies hesitate to adopt agentic AI due to heightened privacy and security concerns. These risks build on those seen with generative AI and other systems.
With large language models (LLMs), any data provided to the model can become embedded within it. There is no way for the model to ‘forget’ that information. Security attacks like prompt injection exploit this to extract proprietary or sensitive information. Agentic AI raises the stakes further because these systems often have broad access to multiple platforms, increasing the risk of exposing private data across various sources.
To mitigate these risks, companies must take a structured, security-first approach to implementation. Starting small is key: businesses should containerise the data as much as possible to ensure it is not being exposed beyond the internal domain where it is needed. It is also critical to anonymise the data, obscuring the user and stripping any personally identifiable information from the prompt before sending it to the model.
At a high level, we can look at three different types of agentic AI systems and their respective security implications for business use:
- Consumer agentic AI: Typically an external AI model accessed via an internal user interface. Companies have no control over the AI itself — only over the data and prompts they send.
- Employee agentic AI: An internally built AI used within the company. While this set-up minimises risk, there is still the concern that highly sensitive information could be exposed to employees who are not authorised to see it.
- Customer-facing agentic AI: An AI system built by a business to serve its customers. Since interacting with customers inevitably carries risk, effective segmentation is essential to avoid exposing private customer data (see the sketch after this list).
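A minimal sketch of that segmentation idea, assuming a simple in-memory record store: the agent’s context is always scoped to the authenticated customer, so it can never retrieve another customer’s records. In practice this constraint belongs in the data layer, not just in application code.

```python
# Sketch of per-customer segmentation for a customer-facing agent.
# The in-memory store and customer ids are illustrative only.

RECORDS = {
    "cust-001": ["order #1001 shipped", "invoice #88 paid"],
    "cust-002": ["order #2002 delayed"],
}

def fetch_context(authenticated_customer_id):
    # The agent only ever sees records belonging to the customer it serves.
    return RECORDS.get(authenticated_customer_id, [])

def answer(question, customer_id):
    context = fetch_context(customer_id)
    # Placeholder for the model call; the point is the scoped context.
    return f"Answering '{question}' with {len(context)} records for {customer_id}"

print(answer("Where is my order?", "cust-001"))
```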
Data quality and applicability
But even with strong privacy measures in place, agentic AI is only as effective as the quality and relevance of the data it relies on.
Too often, generative AI models fail to deliver the expected results because they are disconnected from the most accurate, current data. Agentic AI systems face additional issues because they interact with multiple platforms and data sources, pulling information dynamically to execute tasks.
This is where data streaming platforms (DSPs) play a crucial role. By enabling real-time data integration, DSPs connect agentic AI to accurate and reliable information that can then be used to deliver relevant answers. Solutions like Apache Kafka and Kafka Connect allow developers to bring in data from disparate sources, while Apache Flink facilitates seamless communication between models. These tools help agentic AI systems stay effective, reduce hallucinations and generate results grounded in trustworthy, fresh data.
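For illustration, here is a small sketch of grounding an agent’s prompt in fresh events from a Kafka topic, using the confluent-kafka Python client. The broker address, topic name and build_prompt helper are assumptions made for this example, not a prescribed setup.

```python
from confluent_kafka import Consumer

# Sketch: ground an agent's prompt in recent events from a Kafka topic.
# Broker address, topic name and build_prompt() are illustrative.

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",   # assumed local broker
    "group.id": "agent-context",
    "auto.offset.reset": "latest",           # only consume fresh events
})
consumer.subscribe(["inventory-updates"])    # hypothetical topic

def build_prompt(question, facts):
    context = "\n".join(facts)
    return f"Answer using only these facts:\n{context}\n\nQuestion: {question}"

facts = []
try:
    for _ in range(10):                      # gather a small batch of recent events
        msg = consumer.poll(timeout=1.0)
        if msg is None or msg.error():
            continue
        facts.append(msg.value().decode("utf-8"))
finally:
    consumer.close()

print(build_prompt("Which items are low on stock?", facts))
```

The same pattern extends naturally: the fresher and better curated the streamed facts, the less the agent has to rely on stale or memorised information.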
The path forward
AI is still new territory for many companies, and fully capitalising on the benefits the technology offers takes time and significant investment. Many businesses will need to purchase new hardware and GPUs, and build new data infrastructure, including memory management for caching and for short- and long-term storage. Beyond technical requirements, companies must also build in-house inference models and develop or hire talent with specialised skills in AI. Return on investment will take time.
Despite these challenges, agentic AI is on track to follow the same rapid adoption curve as generative AI. We’re already seeing some AI technology vendors move in this direction, and enterprises that prepare for the age of agentic AI now will be in the best position to reap the benefits later. While the upfront investment is significant, the potential impact could far exceed that of generative AI alone.