AI Agents
PwC and Anthropic expand alliance for agentic AI: what changed, and what to watch
PwC says it will scale Claude Code and Cowork across its teams and deepen agentic builds for clients. The bigger story is how consultancies are becoming the distribution layer for AI operating models.
News
What PwC and Anthropic announced (May 14, 2026)
PwC and Anthropic announced a major expansion of their strategic alliance, putting Claude to work across how PwC builds technology, executes deals, and reinvents enterprise functions for clients. PwC also framed this as an internal scale story: rolling out Claude Code and Claude Cowork, establishing a joint Center of Excellence, and training and certifying 30,000 PwC professionals on Claude.
PwC described three focus areas: agentic technology builds with engineering teams using Claude Code, AI-native deal-making across diligence and integration, and reinvention of enterprise functions such as finance, supply chain, HR, and engineering. The company positioned the "Office of the CFO" as the first at-scale expression of this work, including a Claude-native finance business group.
Context
Why consulting firms are becoming the distribution layer for AI agents
Enterprises don't adopt AI because a model is impressive in a demo. They adopt AI when it changes the economics of a workflow: fewer handoffs, shorter cycles, better controls, lower error rates, or faster software delivery. That takes more than a model subscription. It takes integration with systems of record, governance, and teams that can redesign processes.
That's where large consultancies and systems integrators are inserting themselves: as the layer that packages AI into deployable patterns. When these firms standardize training, tooling, and delivery playbooks, they effectively become a go-to-market channel for AI providers into regulated enterprise workflows.
Details
Three focus areas: build, deal-making, and enterprise functions
PwC's first theme is shipping software faster: engineering teams using Claude Code to deliver production software, with a growing portfolio of "agentic builds" across industries. This matters because enterprise AI ROI often shows up first in engineering velocity: modernization, integration, migration, and reducing backlogs.
The second theme is deals. PwC described agents working alongside deal teams across diligence, value creation, and integration, compressing the path from investment thesis to value capture. If true at scale, this shifts what diligence and integration work is economically viable in mid-market deals.
The third theme is enterprise function reinvention. PwC is talking about AI-native operating models, not assistive tooling: finance, supply chain, HR, and engineering. That implies new role definitions, control points, and review standards, not just new user interfaces.
CFO
Why finance is the first at-scale beachhead
Finance has a rare combination of properties that make it a natural early target for agentic workflows: highly repeatable processes, abundant structured data, clear definitions of correctness (reconciliation, variance, controls), and strong incentives to reduce cycle time. It's also an area where governance is non-negotiable, which forces the hard work of building auditability early.
PwC specifically called out use cases like journal entry work, variance analysis, and RFP workflows, and described working with Anthropic's own CFO organization to scale operations, controls, and international payroll. This "customer zero" pattern is important: it helps teams discover what breaks in production before packaging it for clients.
Tooling
Claude Code, Cowork, and MCP: why the toolchain matters
PwC's press release highlights Claude Code and Claude Cowork as named products, not just a generic model. That's a signal about where value is shifting: from raw chat to agentic toolchains that can read context, execute steps, and produce artifacts with traceability.
PwC also pointed to Anthropic's Model Context Protocol (MCP) as a way to connect agents inside productivity tools to enterprise data. For enterprises, the question is not whether an agent can call tools, but whether those tool calls are governed: identity, permissions, logging, policy checks, and separation of duties.
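To make the governance point concrete, here is a minimal sketch of a policy layer around agent tool calls: permission checks, a human-approval gate, and an audit log. All names here (`ToolPolicy`, `governed_call`, the example tools) are illustrative assumptions, not part of MCP or any PwC or Anthropic system.

```python
import json
import logging
from dataclasses import dataclass, field
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("tool-audit")

@dataclass
class ToolPolicy:
    """Illustrative per-identity allowlist with a human-approval flag."""
    allowed_tools: set
    requires_approval: set = field(default_factory=set)

def governed_call(identity: str, policy: ToolPolicy, tool: str,
                  args: dict, tools: dict, approver=None):
    """Check permissions, require approval where flagged, and log every call."""
    if tool not in policy.allowed_tools:
        log.warning("DENY %s -> %s", identity, tool)
        raise PermissionError(f"{identity} may not call {tool}")
    if tool in policy.requires_approval:
        if approver is None or not approver(identity, tool, args):
            raise PermissionError(f"{tool} requires human approval")
    result = tools[tool](**args)
    # Structured audit record: who called what, with which arguments, when.
    log.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "identity": identity, "tool": tool, "args": args,
    }))
    return result

# Usage: a read tool is allowed freely; a write tool needs approval.
tools = {
    "read_ledger": lambda account: {"account": account, "balance": 100},
    "post_journal_entry": lambda entry: {"posted": entry},
}
policy = ToolPolicy(allowed_tools={"read_ledger", "post_journal_entry"},
                    requires_approval={"post_journal_entry"})
print(governed_call("agent-1", policy, "read_ledger",
                    {"account": "1000"}, tools))
```

In a real deployment the approval gate would route to a ticketing or review system and the log would land in an immutable store, but the separation of duties shown here is the core idea.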
Impact
Claims to scrutinize: production stories and measurable outcomes
PwC said live deployments include examples such as insurance underwriting cycles compressed from ten weeks to ten days, HR programs turned around from stalled to prototype in one week, and incident response accelerated from hours to minutes. It also said clients are reporting delivery improvements of up to 70% across these deployments.
Treat these as directional signals rather than universally portable benchmarks. The key questions are: what workflows were targeted, what controls were added, what data access was required, and what failure modes emerged. Those details determine whether a story is replicable in your environment.
Playbook
A practical evaluation checklist for agentic AI partnerships
If you're considering a partner-led agentic AI rollout, ask for an architecture diagram and a control narrative, not just a demo. You want to see where the agent runs, what it can read, what it can write, and how actions are approved and logged.
Then require an evaluation plan. The pilot should define success metrics, failure classes, and a review cadence. For regulated functions, insist on audit trails, access reviews, and red-team style abuse testing before scaling.
Workflow scope: which steps are automated vs human-approved.
Data access: systems of record, permissions, and least-privilege design.
Controls: logging, approvals, segregation of duties, and rollback paths.
Evaluation: offline test sets, ongoing monitoring, and incident response.
Change management: training, role impacts, and accountability for outcomes.
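The checklist above can be turned into a simple gate: before a pilot scales, every area must be explicitly addressed in the plan. The structure below is a hypothetical encoding of those five areas, not a PwC artifact.

```python
# Hypothetical pilot-readiness gate encoding the five checklist areas.
PILOT_CHECKLIST = {
    "workflow_scope": ["automated_steps", "human_approved_steps"],
    "data_access": ["systems_of_record", "least_privilege_roles"],
    "controls": ["logging", "approvals", "segregation_of_duties", "rollback"],
    "evaluation": ["offline_test_set", "monitoring", "incident_response"],
    "change_management": ["training_plan", "role_impacts", "outcome_owner"],
}

def readiness_gaps(plan: dict) -> list:
    """Return checklist items the pilot plan has not yet addressed."""
    gaps = []
    for area, items in PILOT_CHECKLIST.items():
        for item in items:
            if not plan.get(area, {}).get(item):
                gaps.append(f"{area}.{item}")
    return gaps

# A plan that only covers two controls still has open items everywhere else.
plan = {"controls": {"logging": True, "rollback": True}}
print(readiness_gaps(plan))
```

The value of a gate like this is not the code; it is forcing the scaling conversation to enumerate what has actually been designed versus what is still assumed.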
Sources
Primary sources and further reading
For the official details, start with PwC's announcement. For broader context on Anthropic's enterprise partner strategy, see Anthropic's posts about its Claude Partner Network and enterprise services partnerships.

FAQ
Common questions
What does "agentic AI" mean in an enterprise context?
In practice, it means AI systems that can follow multi-step workflows and take actions through tools and integrations, not just generate text. In enterprises, the key requirements are governance: identity, permissions, audit logs, approvals, monitoring, and rollback paths.
Why would PwC roll out Claude Code and Claude Cowork internally?
Internal rollout creates repeatable patterns. When a firm standardizes training and tooling, it can deliver faster, discover failure modes earlier, and build a playbook that can be reused across client engagements.
What should a good agentic AI pilot include?
A good pilot targets one bounded workflow, uses real data, defines success metrics and failure classes, includes access control and audit logging, and has a review cadence. The pilot should prove that outcomes improve without weakening compliance or safety controls.
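One way to make "success metrics and failure classes" operational is a tiny offline evaluation that refuses unlabeled outcomes. This is a sketch under assumed labels (`wrong_amount`, `missing_approval`, `policy_violation` are invented failure classes for illustration).

```python
from collections import Counter

# Hypothetical failure classes a finance pilot might define up front.
FAILURE_CLASSES = {"wrong_amount", "missing_approval", "policy_violation"}

def evaluate(outcomes: list) -> dict:
    """Summarize labeled pilot outcomes; reject labels outside the taxonomy."""
    counts = Counter(outcomes)
    unknown = set(counts) - FAILURE_CLASSES - {"pass"}
    if unknown:
        raise ValueError(f"unlabeled failure classes: {unknown}")
    total = sum(counts.values())
    return {
        "success_rate": counts["pass"] / total,
        "failures": {k: counts[k] for k in FAILURE_CLASSES if counts[k]},
    }

print(evaluate(["pass", "pass", "wrong_amount", "pass"]))
```

Forcing every non-pass outcome into a named failure class is what turns a demo into an evaluable pilot: new failure modes show up as errors in the harness, not as silent misses.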
Is model choice the most important part of an enterprise AI rollout?
Model choice matters, but deployment discipline often matters more. Integrations, governance, evaluation, and change management usually determine whether an agentic rollout produces measurable outcomes or stalls after the demo phase.
Final recommendation
Make the workflow repeatable before you scale it.
If you're evaluating an agentic AI partnership, focus on the operating model: controls, integrations, audit trails, and the ability to ship measurable outcomes in weeks. Model choice matters, but deployment discipline matters more.