PEX Network’s key takeaways:
- Puneet Thakkar, finance process and systems transformation lead at Google, explores why generative artificial intelligence (AI) has stalled inside modern organizations.
- Businesses are awash in pilots and proofs of concept, yet few can point to sustained bottom-line results.
- Traditional software and governance models are failing in an agentic world.
Generative AI has crossed a critical threshold in the enterprise. While experimentation is nearly universal, meaningful business impact remains elusive.
Organizations are awash in pilots and proofs of concept, yet few can point to sustained bottom-line results. This growing disconnect has given rise to what many leaders now recognize as the generative AI paradox: rapid adoption without corresponding return on investment (ROI).
As market sentiment shifts, enterprises are being forced to confront a hard truth – incremental automation and scattered experimentation are no longer enough.
In this interview, Puneet Thakkar, finance process and systems transformation lead at Google, explores why generative AI’s promise has stalled inside modern organizations and what it will take to move from hype to execution.
The conversation unpacks why traditional software and governance models are failing in an agentic world, how new architectures enable autonomous, end-to-end workflows, and why security and data strategy (not inference costs) are the real barriers to scale. Most importantly, it offers a practical blueprint for leaders looking to escape pilot mode and build a truly self-driving enterprise.
You can hear more from Thakkar when he speaks at All Access: Future of BPM 2026 on February 10!
PEX Network: What is the generative AI paradox and what impact does it have in modern organizations?
PT: The era of “letting a thousand flowers bloom” is over. The market is no longer rewarding scattered experimentation; it is rewarding disciplined, top-down execution.
The paradox exists because organizations are stuck in ‘pilot mode’ – using AI to automate tasks (like summarizing emails) rather than re-engineering the process. We are effectively ‘bolting Ferrari engines onto horse carts.’ Until we stop using advanced AI to speed up legacy workflows, the paradox will persist.
PEX Network: Why has generative AI adoption outpaced demonstrable business impact compared to prior enterprise technologies?
PT: It comes down to the difference between ‘horizontal’ adoption and ‘vertical’ impact. We have seen massive horizontal spread – employees ‘playing with frameworks’ and building their own simple agents. This creates ‘wow’ moments, but not ‘work’ moments.
The real ROI comes when companies pick five to seven use cases (like claims processing or order-to-cash) and drive adoption top-down. However, this ‘vertical autonomy’ has lagged because the architecture wasn’t ready. You cannot build complex, vertical agents if they cannot speak to each other.
We are now solving this with agent-to-agent protocols, allowing agents to coordinate across platforms (e.g. ERP to CRM) without human routing.
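To make the idea concrete, here is a minimal, illustrative sketch of agent-to-agent coordination: a simple router hands a structured message from an ERP-side agent to a CRM-side agent with no human routing the request. The agent names, message fields and router are hypothetical stand-ins, not a reference to any particular vendor protocol.

```python
# Minimal sketch of an agent-to-agent hand-off between an ERP-side and a
# CRM-side agent. Names and message fields are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class AgentMessage:
    sender: str
    intent: str                     # e.g. "order.fulfilled"
    payload: dict = field(default_factory=dict)


class Agent:
    def __init__(self, name: str):
        self.name = name
        self.handlers = {}          # intent -> callable

    def on(self, intent: str, handler):
        self.handlers[intent] = handler

    def receive(self, msg: AgentMessage):
        handler = self.handlers.get(msg.intent)
        return handler(msg) if handler else None


class Router:
    """Routes messages between agents without a human in the loop."""
    def __init__(self):
        self.agents = {}

    def register(self, agent: Agent):
        self.agents[agent.name] = agent

    def send(self, to: str, msg: AgentMessage):
        return self.agents[to].receive(msg)


# Example: the ERP agent notifies the CRM agent that an order shipped,
# and the CRM agent updates the customer record on its own.
router = Router()
erp, crm = Agent("erp"), Agent("crm")
router.register(erp)
router.register(crm)

crm.on("order.fulfilled", lambda m: f"CRM updated account {m.payload['account_id']}")

print(router.send("crm", AgentMessage("erp", "order.fulfilled", {"account_id": "A-1042"})))
```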
PEX Network: Is the ROI gap primarily a measurement problem, an execution problem, or a capability maturity problem?
PT: It is an execution architecture problem. We are trying to build autonomous systems using the traditional software development lifecycle (SDLC), which acts like a linear relay race.
To close the gap, we need a new blueprint: the agentic SDLC. This framework shifts execution from a linear line to a continuous loop. It enables agents to access unstructured data (the ‘big unlock’ of generative AI) and use it to drive ‘dynamic synthesis.’ We don’t need better measurement; we need an agentic platform that is future-proof and compliant from day one.
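As a rough sketch of that shift from a linear line to a continuous loop, the code below assumes a generic plan-act-evaluate-refine cycle; the function names, toy data and evaluation rule are illustrative, not a specific agentic SDLC framework.

```python
# Sketch of the "continuous loop" idea: plan, act, evaluate, refine, and
# repeat until the goal is met or attempts run out. All names are illustrative.
from typing import Callable


def agentic_loop(goal: str,
                 plan: Callable[[str, list], str],
                 act: Callable[[str], str],
                 evaluate: Callable[[str, str], bool],
                 max_iterations: int = 5) -> str | None:
    feedback: list[str] = []
    for _ in range(max_iterations):
        step = plan(goal, feedback)          # re-plan with accumulated feedback
        result = act(step)                   # execute the step (tool call, query, draft)
        if evaluate(goal, result):           # automated check, not a human gate
            return result
        feedback.append(result)              # loop back instead of handing off
    return None                              # escalate to a human if the loop stalls


# Toy usage: keep revising a draft until it mentions every required term.
required = {"invoice", "approval"}
draft = agentic_loop(
    goal="summarize the invoice approval policy",
    plan=lambda g, fb: g if not fb else g + " (revise: " + fb[-1] + ")",
    act=lambda step: "invoice approval summary" if "revise" in step else "invoice summary",
    evaluate=lambda g, out: required.issubset(out.split()),
)
print(draft)
```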
PEX Network: How should companies think about the true cost of generative AI, including inference, integration, governance, and change management?
PT: The hidden cost isn’t inference – it is security and governance. AI introduces a ‘new surface area for attack.’ If your data isn’t secure where it lives, your agent is a liability.
Companies must shift investment toward ‘AI as judge.’ This means building automated guardrails and test-driven development (TDD) protocols. We must recognize that there is no AI strategy without a data strategy. The true cost is building the ‘immunology’ of the enterprise, ensuring an agent cannot execute a transaction unless it passes strict, mathematical security tests.
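As an illustration of the ‘AI as judge’ pattern, the sketch below gates an agent’s proposed transaction behind a set of automated guardrail checks. The spend limit, vendor allow-list and data-classification rules are hypothetical placeholders for an organization’s real policies.

```python
# Sketch of an "AI as judge" guardrail: an agent's proposed transaction only
# executes if every automated check passes. The checks are illustrative
# stand-ins for real policy, security and data-access rules.
from dataclasses import dataclass
from typing import Callable


@dataclass
class ProposedTransaction:
    agent_id: str
    amount: float
    approved_vendor: bool
    data_classification: str      # e.g. "public", "internal", "restricted"


GuardRail = Callable[[ProposedTransaction], bool]

guardrails: list[GuardRail] = [
    lambda t: t.amount <= 10_000,                      # spend limit
    lambda t: t.approved_vendor,                       # vendor allow-list
    lambda t: t.data_classification != "restricted",   # data-handling policy
]


def execute_if_compliant(txn: ProposedTransaction) -> str:
    failed = [i for i, check in enumerate(guardrails) if not check(txn)]
    if failed:
        return f"blocked: failed guardrails {failed}"   # never reaches the ERP
    return "executed"                                   # placeholder for the real posting


print(execute_if_compliant(ProposedTransaction("ap-agent", 2500.0, True, "internal")))
print(execute_if_compliant(ProposedTransaction("ap-agent", 2500.0, False, "internal")))
```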
PEX Network: What metrics should leaders use instead of (or alongside) traditional ROI to assess generative AI investments?
PT: Stop measuring hours saved. Instead, measure process autonomy level and workforce bilingualism.
- Process autonomy: On a scale of one to five, how much of the end-to-end workflow happens without human intervention?
- Bilingual talent density: Google leadership has identified that the companies of the future need a bilingual workforce – employees who understand their domain (e.g. finance) and know how to use AI. In my framework, I call this the ‘context engineer.’ These are the people who can translate business logic into secure agentic architectures.
When you measure autonomy and talent density, you stop building pilots and start building a self-driving enterprise.
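For readers who want to operationalize the first metric, here is a toy sketch that scores process autonomy as the share of workflow steps completed without human intervention, mapped onto a one-to-five level. The thresholds and the order-to-cash steps shown are illustrative assumptions, not Thakkar’s scale.

```python
# Sketch of scoring "process autonomy level" for one end-to-end workflow.
# Thresholds and step data are illustrative, not a standard scale.
def autonomy_level(steps: list[dict]) -> tuple[float, int]:
    autonomous = sum(1 for s in steps if not s["human_touch"])
    share = autonomous / len(steps)
    level = 1 + min(4, int(share * 5))   # 0-20% -> 1 ... 80-100% -> 5
    return share, level


order_to_cash = [
    {"step": "order capture",    "human_touch": False},
    {"step": "credit check",     "human_touch": False},
    {"step": "invoicing",        "human_touch": False},
    {"step": "dispute handling", "human_touch": True},
    {"step": "cash application", "human_touch": False},
]

share, level = autonomy_level(order_to_cash)
print(f"{share:.0%} of steps autonomous -> autonomy level {level}")
```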