If you ask Craig LeClair, VP and Principal Analyst at Forrester, agentic AI doesn’t actually exist. Not yet, anyway.
While that stance may seem at odds with both market buy-in and industry hype, LeClair and his Forrester colleagues argue that “agentic” has become one of the most overused terms in technology – applied so broadly that it risks strategic misdirection for the organizations trying to build around it. If everything is “agentic,” then really, nothing is.
So how does that square with organizations like Nintex, which recently announced Nintex Agent Designer as one of two new orchestration capabilities in CE, or with the organizations investing in the promise that agentic AI will fundamentally change the way work gets done?
Of course, AI agents exist. But to Forrester, “an AI agent” does not “agentic AI” make. The distinction comes from how independently – and how consequentially – these agents operate. Are they really agentic, or are they more … agentish?
In conversation with Niranjan Vijayaragavan, Chief Product and Technology Officer at Nintex, LeClair laid out a more precise agentic AI framework that can help organizations assess their AI capabilities and plan accordingly.
The distinction between agentic and agentish AI
Most “AI agents” on the market today fall into one of three categories: insight agents, solver agents, and worker agents, all of which lack the action components necessary to be truly agentic.
Insight agents are essentially LLMs used in standalone search and RAG configurations to surface information. They are useful and productive within limited boundaries, but passive by nature.
Solver agents can handle specific, bounded tasks in an otherwise deterministic process. “There are some real good examples of solver agents in production,” LeClair said. “We recently explored some for anti-money laundering. Where traditionally, a human would examine a flagged alert from a transaction system to determine whether it’s a false positive, these tools let the AI models make that determination – and they’re pretty good at it. They can make connections between geographic endpoints, social endpoints, and core data endpoints, to eliminate about half of the false positives that would have been routed to a human. But that still leaves the full end-to-end process for AML in place – you’re just automating a specific step. We don’t consider that truly agentic. It is agentish.”
Worker agents, he says, are “more interesting,” but again, not truly agentic. In the case of worker agents, the model is doing the orchestration for the work pattern itself – independently flipping to preconfigured or pre-built automations like an RPA bot, an API workflow, another model endpoint, or a human endpoint. Impressive, but still agentish.
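The worker-agent pattern LeClair describes – a model orchestrating a work item by flipping to preconfigured automations – can be sketched in a few lines. This is a hypothetical illustration, not any vendor's implementation: the `classify` stub stands in for a model call, and the handler names are invented.

```python
# Minimal sketch of the "worker agent" pattern: the model does the
# orchestration, routing each work item to a preconfigured automation.
# classify() is a stand-in for a model call; handler names are hypothetical.

def classify(work_item: dict) -> str:
    """Pretend model call that picks the right automation for the item."""
    if work_item.get("needs_human_review"):
        return "human"
    if work_item.get("has_api"):
        return "api_workflow"
    return "rpa_bot"

# Preconfigured, pre-built endpoints the agent can flip to: an RPA bot,
# an API workflow, or a human queue.
HANDLERS = {
    "rpa_bot": lambda item: f"RPA bot processed {item['id']}",
    "api_workflow": lambda item: f"API workflow processed {item['id']}",
    "human": lambda item: f"routed {item['id']} to a human queue",
}

def worker_agent(work_item: dict) -> str:
    # The "agentish" ceiling: the agent chooses among existing handlers,
    # but never creates a new tool at runtime.
    route = classify(work_item)
    return HANDLERS[route](work_item)

print(worker_agent({"id": "case-42", "has_api": True}))
```

The key limitation is visible in `worker_agent`: every possible route already exists in `HANDLERS` before the process runs, which is exactly why this pattern stops short of Forrester's bar for agentic.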
So what does constitute a true agentic model?
LeClair points to the as-yet nonexistent “executive agent,” able to make its own independent connections across multiple information domains, as a model that earns the classification. To get there, it needs to do three specific things.
The three criteria for truly agentic AI
To qualify as truly agentic under Forrester’s definition, an AI model must be able to do these three things:
- Create tools or patterns at runtime that didn’t exist before. In truly agentic AI, the system understands the goal and realizes it doesn’t have something in its library of available assets to reach that goal, so it creates something new, like building an API on the fly.
- Collaborate in a sophisticated way among sub-agents. Agentic sub-agents operate under the control of a master agent; they will have conflicts, and they have to be able to interact with each other – and with the master agent – to resolve those conflicts and move forward. “In tests, the agents have formed their own language and do things very efficiently in ways no human designer could anticipate,” said LeClair.
- Self-optimize. True AI agents can improve how the system runs using information and data generated within the running process itself.
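To make the third criterion concrete, here is a toy illustration of self-optimization: a router that tunes its own behavior from outcomes produced inside the live process. The threshold, feedback signal, and update rule are all invented for this sketch; no real agentic system is this simple.

```python
# Toy sketch of self-optimization: the system adjusts its own decision
# boundary using feedback generated by the running process itself.
# The threshold, learning rate, and update rule are hypothetical.

class SelfOptimizingRouter:
    def __init__(self, threshold: float = 0.5, lr: float = 0.1):
        self.threshold = threshold  # confidence needed to act autonomously
        self.lr = lr                # how aggressively to self-adjust

    def route(self, confidence: float) -> str:
        return "autonomous" if confidence >= self.threshold else "human"

    def feedback(self, confidence: float, was_correct: bool) -> None:
        # Learn from in-process outcomes: raise the bar after a wrong
        # autonomous call, lower it after a correct one.
        if confidence >= self.threshold and not was_correct:
            self.threshold = min(1.0, self.threshold + self.lr)
        elif was_correct:
            self.threshold = max(0.0, self.threshold - self.lr)

router = SelfOptimizingRouter()
router.feedback(0.6, was_correct=True)  # correct call -> more autonomy
print(router.route(0.45))
```

The point of the sketch is the feedback loop, not the arithmetic: the system's operating parameters change based on data it produced while running, with no human redesign step in between.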
The systems in production today don’t quite meet all three criteria. But that’s okay. In fact, agentish may be exactly what your organization needs in 2026.
Why agentish might be the perfect AI starting point for your organization
Much of the reason AI fails to meet organizational expectations is a mismatch in those expectations – not so much between the organization and its AI vendors, but within the organization itself.
According to recent Forrester research, only about 15% of organizations are showing real, measurable productivity from AI agents in their current state. Another 40% are in what Forrester calls “proof of concept purgatory” – they’ve undertaken a use case, applied models to it, and produced something genuinely interesting. But they’re not comfortable moving it into production. And roughly 15% aren’t doing anything meaningful yet, sitting in neutral and waiting to see how things shake out.
That means many organizations are either stuck or just getting started. Yet the conversation in boardrooms, at conferences, and in vendor briefings would have you believe agentic AI is already transforming enterprise operations at scale. So where does the disconnect come from?
- Inability to scale. A well-constructed demonstration can make a solver agent – one that’s handling a single, well-scoped decision within a tightly controlled workflow – look indistinguishable from a fully autonomous executive agent reasoning across an entire business process. The leap from “this looks impressive in a controlled environment” to “this is ready for our mission-critical operations” is enormous, both in practicality and in ROI, but it’s easy to lose sight of that and get caught in proof-of-concept purgatory.
- The taxonomy problem. When analysts, vendors, champions, and users all use the word “agentic” to describe a remarkably wide range of capabilities, from basic LLM-powered search tools to multi-agent orchestration systems, buyers get confused about what is actually possible today. This isn’t unique to AI: Every major technology wave goes through a period where the language outruns the reality. The answer isn’t skepticism about the technology itself, but rather a more precise vocabulary for talking about it. That’s exactly what the agentish versus agentic distinction is designed to provide … and it’s why platforms that are transparent about where their capabilities sit on that spectrum are better partners than those that paper over the difference.
- Internal pressure pushes perception forward. Forrester’s 2025 research found that automation strategy is now owned at the CIO or COO level in 61% of organizations, up from 45% just a year prior. That’s a meaningful shift, and it reflects real top-down urgency to show progress on AI. When senior leadership is watching and expecting results, there’s a natural tendency for teams to frame their proof-of-concept work in the most favorable terms possible. Agentish becomes agentic in the retelling, not out of dishonesty, but out of the pressure to meet expectations.
The practical consequence of all this is that many organizations are making strategic decisions about platforms, governance models, resourcing, and timelines based on an inflated picture of where they actually are. Meanwhile, the value being generated at the agentish tier is real and meaningful, and for most organizations it represents the single biggest near-term opportunity on the table.
You say you want agentic AI … but are you ready to trust it?
Even among organizations that understand the difference between agentish and agentic, many that say they want full autonomy aren’t prepared to hand over control when the moment arrives.
When Forrester asks technology leaders what keeps them up at night about AI adoption, the answers are remarkably consistent. Explainability tops the list almost every time – the ability to understand how a decision was made, to trace the logic of an outcome, to answer a regulator, an auditor, or a board member who asks “why did the system do that?”
In a deterministic automation environment, that question has a straightforward answer. The logic was designed in advance, the routing rules were written by humans, and the audit trail reflects a process someone consciously built. In a truly agentic system, the answer is fundamentally different. Decisions are made autonomously, connections are formed in ways no human designer anticipated, and the system may optimize itself in ways that are difficult to reconstruct after the fact.
That’s not a flaw in the technology so much as it is the technology working as intended. But it creates a profound organizational challenge: The very things that make executive agents powerful – their autonomy, their ability to act in novel ways, their capacity to form novel solutions – are precisely the things that make organizations uncomfortable handing over control.
Data governance compounds the problem. Proprietary data is the essence of any meaningful agentic deployment, but it’s also the thing organizations are most protective of.
Concerns about data leakage, unsanctioned model usage, and the creeping sprawl of AI tools brought in through vendor upgrades or individual employees are widespread. Many organizations are responding by hosting models privately, restricting what data gets exposed to which systems, and building approval layers around AI tool adoption – all of which are sensible precautions, but all of which also slow the path to the kind of open, cross-domain intelligence true agentic systems require.
The result is the paradox currently playing out: Leadership mandates agentic AI, teams pursue it earnestly, and then, when it comes time to let the system make a consequential decision without human review, someone puts on the brakes. The trust infrastructure needed to feel confident in that moment simply isn’t there yet.
This is why the agentish tier of capabilities isn’t just where most organizations happen to be right now … it’s where most organizations should be right now.
Readily available solver and worker agents operating within deterministic, governed workflows allow organizations to build a track record. Every well-governed agentish deployment that performs reliably, produces explainable outcomes, and doesn’t leak data or produce unexpected results builds the organizational trust that will eventually make the leap to true autonomy possible. You don’t earn the right to let go of the controls by throwing caution to the wind and deciding to trust the system. You earn it by building enough evidence, over enough time, that trust becomes the rational conclusion.
From agentish to agentic: Building a foundation for the future
The organizations that reach truly agentic first won’t be the ones that moved fastest. They’ll be the ones that built most deliberately.
For most organizations, that deliberate path begins with use case selection. Identify the high-volume, repetitive decision points within existing processes where an agent can take over a specific step with confidence and transparency. Build those deployments within deterministic, governed workflows so that outcomes are explainable and auditable from day one, and let their track record accumulate.
But beware that individual deployments will only get you so far. Without end-to-end orchestration – a layer that connects agents, people, processes, and systems into a coherent, governable whole – even truly agentic capabilities remain isolated pockets of efficiency. With it, they become the foundation for the genuinely agentic systems to come.
From there, the path to true agentic autonomy becomes less of a leap and more of a natural progression.
Want to hear everything else Craig LeClair and Niranjan Vijayaragavan talked about during their NintexConnect fireside chat? Listen to their discussion on-demand today.