The Agentic Era Arrived. Enterprises Didn't: What Harvard and Stanford HAI Are Telling Us
86% of organizations plan to invest in autonomous AI agents. Only 6% fully trust them to operate without supervision. Only 15% have the data infrastructure to support them. This is not a technology problem.
By Sebastian Martinez, CEO of ITSense
There is a fundamental contradiction at the center of the enterprise AI conversation in 2026. Investment intent is at an all-time high: the vast majority of mid-market and enterprise organizations have autonomous AI agents on their roadmaps. Yet when executives are asked directly whether they trust those agents to make decisions without continuous human oversight, fewer than one in sixteen says yes.
This pattern is not new. In Edition #1 of Pulse AI by ITSense, we covered the macro picture using Stanford HAI's 2026 AI Index: accelerating surface-level adoption paired with infrastructure and organizational readiness that consistently lags behind. What this week's research confirms is that the pattern has a sharper name: the agentic gap.
The data comes from two institutions that rarely converge with this level of precision — Harvard Business Review's research division and the Stanford Institute for Human-Centered AI (HAI). Both published critical analyses between February and March 2026. Both point to the same structural problem from different angles. Read together, they form the clearest picture yet of where enterprise AI actually stands.
What an Agent Actually Is (and What It Is Not)
Before getting into the numbers, we need to define the term. "AI agent" is currently used to describe a wide spectrum of systems, and that ambiguity is part of the problem.
Stanford HAI defines agentic AI as systems that operate with genuine autonomy: they set their own sub-goals within defined parameters, use external tools (APIs, databases, calendars, enterprise systems), make sequential decisions, and adapt when conditions change. They do not wait for instruction at every step. They act.
The practical distinction matters:
| Type | Description | Example |
|---|---|---|
| Chatbot | Answers questions. Requires constant human input. | FAQ assistant on a website |
| Copilot | Suggests; the human decides and executes. | GitHub Copilot, writing assistants |
| Autonomous agent | Sets objective, executes multi-step workflows, acts without continuous supervision. | Agent that reviews, approves, and sends prescription renewals |
Most of what enterprises have deployed so far are copilots. Some are chatbots with better packaging. Real agents — those that run complete business processes without a human in the loop at each decision point — remain exceptional. The case that most clearly illustrates that leap happened in Utah in January 2026.
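The three rows of that table can be made concrete in code. The following is an illustrative Python sketch, not any vendor's API: `plan`, `tools`, and `should_escalate` are hypothetical stand-ins for whatever planning, tooling, and risk logic a real system would use.

```python
def copilot(task, suggest, human_decides):
    """Copilot: the system proposes, the human decides and executes."""
    suggestion = suggest(task)
    return human_decides(suggestion)  # nothing happens without a human action


def agent(goal, plan, tools, should_escalate, max_steps=10):
    """Autonomous agent: plans sub-goals, calls tools, and acts directly,
    handing off to a human only when a step trips the escalation check."""
    for step in plan(goal)[:max_steps]:
        if should_escalate(step):
            return ("escalated", step)  # the only point where a human enters
        tool_name, args = step
        tools[tool_name](*args)         # acts on external systems directly
    return ("completed", goal)
```

The difference is structural: in `copilot` the human sits inside every step, while in `agent` the human appears only at the escalation boundary, which is exactly where the governance questions in the rest of this edition live.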
The Utah Case: First Agentic Pilot at Scale in Healthcare
In January 2026, Utah's healthcare system deployed what is documented as the first large-scale autonomous AI agent handling medical prescription renewals. The agent did not suggest renewals for a physician to approve one by one. It reviewed patient clinical history, checked contraindications, consulted updated clinical guidelines, and issued the renewal directly when criteria were met. Physicians intervened only on cases the system escalated based on risk criteria.
This was not a lab experiment. It was an operational change in a regulated, high-stakes environment.
The Utah pilot matters for two reasons. First, it dismantles the argument that agentic autonomy is premature or only viable in low-risk industries. Second, it demonstrates that the bottleneck was never the technology: it was the months of prior work on structured clinical data, escalation protocols, and decision criteria definition. The technology was the last piece in. Governance and data infrastructure came first.
That connects directly to what Harvard found when they asked the broader enterprise world about their readiness.
The Infrastructure Gap: What Harvard Found
HBR Analytic Services, in partnership with Reltio, published a March 2026 study on enterprise readiness for the agentic era. The findings deserve to be stated without softening:
- 94% of surveyed executives are exploring or deploying AI in their organizations.
- 86% plan to increase investment in autonomous agents over the next two years.
- Only 6% fully trust those agents to operate without continuous supervision.
- Only 20% consider their technology infrastructure ready to support agents at scale.
- Only 15% say their data is in a condition to reliably feed agentic systems.
- 94% rank data quality as critical to their AI initiatives — yet only 39% rate themselves as proficient in data management.
A companion study from HBR and Cloudera adds another dimension: 65% of executives expect AI agents to augment or replace complete business processes within two years. Among that same group, only 7% believe their data is ready for that scenario.
Let that land for a moment: two out of three companies anticipate deep agentic transformation within 24 months. Barely one in fourteen has the data to support it.
"The agentic era has arrived. The average enterprise has not." — Pulse AI by ITSense
This is not a reason for alarm. It is an honest diagnosis that enables a useful response. And the useful response does not start with buying more technology.
The Role Nobody Has Yet: Agent Manager
In February 2026, HBR published "To Thrive in the AI Era, Companies Need Agent Managers". The core argument is straightforward but carries deep implications: when agents are genuinely autonomous, someone in the organization must be accountable for their performance, their boundaries, their alignment with business objectives, and their interaction with other agents and with humans.
That role does not formally exist in almost any enterprise today.
An agent manager is not the developer who deploys the system. It is not a prompt engineer. It is someone who understands the business process the agent executes, grasps the risk profile of each autonomous decision, defines escalation criteria, monitors behavioral anomalies, and responds when the agent fails. It is, in essence, the person responsible for a team member who has no HR file.
In March 2026, HBR extended this with "To Scale AI Agents Successfully, Think of Them Like Team Members". The proposed framework is direct: agents need onboarding (correct data, appropriate tools, process context), performance evaluation (outcome metrics, not just activity), and leadership (someone making decisions about their evolution).
Organizations successfully scaling agents are not doing so because they have better technology. They are doing so because they design human-agent teams with clear responsibilities and defined decision chains.
What This Means for Latin America: The Window Is 2026-2027
The Stanford HAI 2026 AI Index documents something that deserves particular attention for emerging markets: autonomous agent deployment is advancing fastest in sectors with structured data and repeatable processes — healthcare, financial services, logistics, professional services — rather than in sectors with high operational variability.
Latin America has a significant concentration of mid-market companies in exactly those sectors. The relevant question is not whether agents will reach the region. It is whether companies in the region will be ready when competitive pressure makes them non-optional.
The preparation window we see is 2026-2027. Not because there is a magic date, but because data and infrastructure transformation cycles take 12 to 18 months from decision to reliable operation. Companies that begin working on data quality, governance, and agentic process design in the second half of 2026 could be operating real agents by late 2027. Those that wait until it is urgent will be starting in 2028, trying to close a gap that will already be visible in their results.
The Utah pilot was not a lucky technological strike. It was the outcome of a decision made 18 months earlier about how to structure clinical data.
What to Do This Week
No three-year roadmap. Four concrete actions any leadership team can initiate immediately:
1. Audit one critical process through an agentic lens. Choose a process that today requires repeatable, data-driven decisions. Ask: if an agent executed this process, what data would it need, what decision criteria would it follow, and when would it escalate to a human? That exercise reveals more about real readiness than any technology assessment.
2. Evaluate the quality of the data that process consumes. Not quantity — quality. Is it complete? Current? Does it have a single source of truth? Is it accessible via API, or does it require manual extraction? The answers define the actual roadmap.
3. Identify who would be the agent manager for that process. Not the developer: the person accountable for the business outcome. If there is no clear answer, that is critical information about an organizational gap.
4. Read both HBR articles cited in this edition. Not as trend-watching, but as input for a leadership conversation about where the organization stands today and where it needs to be in 18 months.
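The second action, evaluating the data a process consumes, can start as something this small. A hypothetical sketch: the record fields, the required-field list, and the freshness window are assumptions, not a standard.

```python
from datetime import datetime, timedelta

# Illustrative records; the field names are assumptions, not a schema standard.
records = [
    {"customer_id": "c1", "email": "a@example.com", "updated_at": datetime(2026, 4, 1)},
    {"customer_id": "c2", "email": None,            "updated_at": datetime(2024, 1, 5)},
]

REQUIRED_FIELDS = ["customer_id", "email"]
MAX_AGE = timedelta(days=180)  # "current" here means touched in the last six months


def audit(records, now):
    """Score a record set on two of the questions above: completeness and freshness."""
    total = len(records)
    complete = sum(all(r.get(f) for f in REQUIRED_FIELDS) for r in records)
    current = sum(now - r["updated_at"] <= MAX_AGE for r in records)
    return {"completeness": complete / total, "freshness": current / total}
```

Single source of truth and API accessibility do not reduce to a one-liner, but they belong in the same report: the output of an audit like this is the actual roadmap the action describes.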
Closing: This Week's Radar
- Stanford HAI documents agents that reproduced the responses of more than 1,000 real people with 85% accuracy in controlled environments. The boundary between simulation and real-world action continues to narrow.
- The Utah prescription renewal pilot is the benchmark reference for regulated sectors looking to make the case that agentic autonomy is viable today.
- The trust gap (86% will invest / 6% trust) will not close with better technology. It closes with better data, better processes, and better governance structures.
- The agent manager role will appear in formal org charts before 2027 in companies leading adoption. Thinking now about who fills it is a real competitive advantage.
- Latin America has the opportunity to learn from the implementation mistakes already documented in markets that moved faster. That is a genuine structural advantage — if it is used with intention.
At ITSense we work with leadership teams across Latin America navigating exactly this transition: from exploring AI to operating AI. If you want to understand where your organization sits on this curve and which steps carry the highest return in your specific context, let's talk.
Sources cited:
- Stanford HAI. What Is Agentic AI? https://hai.stanford.edu/ai-definitions/what-is-agentic-ai
- Stanford HAI. AI Index Report 2026. https://hai.stanford.edu/ai-index/2026-ai-index-report
- HBR Analytic Services + Reltio. Enterprise AI Readiness Study, 2026.
- HBR + Cloudera. Agentic AI Adoption Study, 2026.
- Harvard Business Review. To Thrive in the AI Era, Companies Need Agent Managers. February 2026. https://hbr.org/2026/02/to-thrive-in-the-ai-era-companies-need-agent-managers
- Harvard Business Review. To Scale AI Agents Successfully, Think of Them Like Team Members. March 2026. https://hbr.org/2026/03/to-scale-ai-agents-successfully-think-of-them-like-team-members
Pulse AI by ITSense — Sector Radar, Edition #02 — April 21, 2026. Subscribe to Pulse AI to receive each edition directly in your inbox.