
External Audit’s Non-Reliance on AI: Work-in-Progress or Emerging Double Standard?
Every Big Four firm is making massive investments in AI.
They’re using it in their own work.
They’re building AI-enabled technologies that they use internally and often sell to clients.
They’re providing AI-related services. Need help securely deploying AI, establishing AI governance, or upskilling employees in using AI? They can do it. Need to develop AI capabilities to innovate business processes, enhance productivity, or reduce costs? They can do it.
But there’s one AI-related activity that most Big Four auditors won’t do. For the most part, they’re not going to rely on your organization’s use of AI in their External Audit work.
This conundrum keeps coming up in Internal Audit Collective discussions. It can feel frustrating.
Every profession — including Internal Audit — is getting the loud-and-clear message that staying relevant, providing value, and keeping our jobs requires us to get on board with using AI.
But if we use AI to support a process or control being audited, External Auditors generally won’t rely on it without significant additional work on Internal Audit’s part.
This is a new challenge, and it will evolve. But it’s worth tracking what Internal Audit leaders are seeing, how they’re responding, and what they’re expecting in the near term.
During a roundtable of ~20 Internal Audit leaders from SaaS companies, hosted by CAE and Internal Audit Collective member Sarah Hansen, the group spent significant time sharing the challenges they’d experienced around External Auditor reliance on AI. Here are some key takeaways. (Note: Given the sensitive nature of External Audit relationships, all quotes are anonymized.)
1. You Must Prove the Human in the Loop
Roundtable participants compared the guidance their External Auditors have provided so far. It’s been general, limited, and mainly focused on ensuring a human in the loop.
Said one Internal Audit leader, “We just started to socialize [our AI use] with our Big Four auditor. How do you get them onboard? They’re saying, ‘There’s got to be a human reviewing — you just can’t rely on AI at this point in time.’ But that’s all the guidance they’ve given us so far. I think that they’re expecting that someone at my company, like the control owner, is still performing some sort of manual review.”
“You have to really make sure you’re showing that they’re reviewing it, too,” added an IT Audit Manager. “Some of the concerns we heard from our Big Four auditor were on use cases of using AI to help annotate support. And one of the things they’re very cautious about right now is making sure that AI didn’t alter the documentation being annotated to match the value we want it to be. So it’s making sure of what the AI model can do.”
Everyone on the call agreed: Human auditors will always need to perform a certain level of review over AI. As one leader put it, “Whether it’s AI or a staff or a third party, that review component needs to be done. There needs to be validation of the output that’s produced. You can apply the same mindset — it’s the same controls we would expect to see for anything.”
The group also acknowledged that both upfront and ongoing validation will be important, given how AI models grow and evolve.
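For illustration, here’s one shape that kind of validation control could take: a minimal sketch, assuming a document-annotation workflow where AI extracts values from contracts and a named human reviewer re-derives them. The file paths, values, reviewer name, and evidence format are hypothetical, not anything prescribed by the roundtable or any External Auditor.

```python
import hashlib
from datetime import datetime, timezone
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Fingerprint the source document so any later alteration is detectable."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def validate_annotation(source_doc: Path, baseline_hash: str,
                        ai_value: str, human_value: str,
                        reviewer: str) -> dict:
    """A human-in-the-loop check: confirm the AI didn't alter the source
    document, and that a named human reviewer re-derived the same value."""
    record = {
        "document": source_doc.name,
        "source_unaltered": sha256_of(source_doc) == baseline_hash,
        "values_match": ai_value.strip() == human_value.strip(),
        "reviewer": reviewer,
        "reviewed_at": datetime.now(timezone.utc).isoformat(),
    }
    record["passed"] = record["source_unaltered"] and record["values_match"]
    return record


# Hypothetical usage: hash each document before it enters the AI workflow,
# then validate after annotation and retain the record as audit evidence.
# baseline = sha256_of(Path("contracts/acme_msa.pdf"))
# evidence = validate_annotation(Path("contracts/acme_msa.pdf"), baseline,
#                                ai_value="$1,200,000",
#                                human_value="$1,200,000",
#                                reviewer="control.owner@example.com")
```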
But the question remains: What specific controls are needed, and at what levels?
This raises another compelling question, posed by a different leader during the discussion: “I’m wondering what they’re all doing internally to get comfortable, because they’re making massive investments in leveraging AI across all of the Big Four firms. What are they doing to get comfortable as they’re incorporating more AI usage into their audit procedures?”
Another CAE responded, “It’s obviously a PCAOB focus area, right? So they’re going to be conservative. But it’s fair to push on that, because if they’re using AI, then you can’t say that and not expect your clients to use it. That’s really not realistic.”
2. Instead of Saving Hours, Many Teams Are Shifting Hours
One CAE observed, “How they’re getting comfortable [with auditees’ AI use] is that there’s a human element reviewing the work papers. But is that really great from a control perspective? We’re building all this automation, yet we have to monitor it in a manual way.”
Another CAE recounted his experience. He was talking with his Big Four External Audit partner, explaining that his team had come up with some ideas on using AI for testing. The CAE had already challenged them on how they would know the AI’s outputs were correct, and he’d come away thinking they had some solid use cases. But when he ran them by the External Audit partner, “She didn’t have a good answer. Her answer was basically, ‘I think you’re going to have to do reperformance of certain things.’ And I said, ‘In what fashion? As in, how many transactions will we have to reperform?’ And then, ‘Does it even make sense?’”
The exchange left him wondering: “Why am I saving hours with AI to add them back later? We’re not really getting the savings we should be.”
Another Internal Audit leader agreed wholeheartedly. She said, “It seems like we’re shifting the hours from the front end and control testing to review across all the work streams. It’s not necessarily a true savings just yet. So I’m looking forward to seeing how we can use it to have a significant impact versus a shifting of hours to other parts of the audit.”
3. A Degree of Reliance Is Possible — But It Takes Work
A CAE shared an experience in which their Big Four External Auditor eventually got somewhat comfortable with one aspect of the organization’s AI use: The company’s Head of Revenue had been using a third-party AI for revenue contract review. There wasn’t 100% reliance, but it was a start.
To demonstrate the human-in-the-loop control point:
- The team had to provide the AI vendor with feedback that their SOC report was light on testing around a critical control point. The vendor needed to ramp up testing in that area and strengthen their SOC report.
- The team also had to prove that the revenue team’s control owner had appropriate monitoring controls in place. In particular, the External Auditor looked very closely at the software development lifecycle (SDLC) and user acceptance testing (UAT), reviewing signoffs, scenarios that failed, and how fixes were implemented. Essentially, the External Auditor got comfortable enough that sample-based management controls (like the selection sketched just after this list) would detect if anything was getting significantly off the straight and narrow.
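What might such a sample-based reperformance control look like in practice? Here’s a minimal, purely illustrative sketch. The 10% rate, the five-contract floor, and the contract IDs are hypothetical assumptions, not anything the roundtable or the External Auditor prescribed.

```python
import random


def select_reperformance_sample(contract_ids: list[str],
                                sample_rate: float = 0.10,
                                minimum: int = 5,
                                seed: int | None = None) -> list[str]:
    """Pick a random sample of AI-reviewed contracts for a human control
    owner to reperform. Recording the seed in the workpapers makes the
    selection reproducible for the External Auditor."""
    rng = random.Random(seed)
    size = min(len(contract_ids),
               max(minimum, round(len(contract_ids) * sample_rate)))
    return sorted(rng.sample(contract_ids, size))


# Hypothetical example: 10% of a quarter's AI-reviewed contracts, at least 5.
population = [f"CTR-{n:04d}" for n in range(1, 201)]
print(select_reperformance_sample(population, sample_rate=0.10, seed=2024))
```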
This successful reliance example came up again when the team talked about how hours were shifting (point #2).
The CAE explained, “While we wait for the SOC report to be enhanced, management ramped up the number of samples that they were reviewing as part of their controls. Technically we shifted to a manual control approach for revenue contract review. But the impact is that the substantive testing went way up for our External Auditor. So, once we can bring the AI-enabled application back in for reliance — which we expect to do as soon as we get the new SOC report — the substantive testing will go back down. So, something to think about from a savings perspective. Maybe you’re not seeing significant savings in terms of management’s hours to execute controls. But your risk coverage is greater, so your audit fees might be lower.”
4. External Auditors Will Probably Wait on the PCAOB
Observed one CAE, “I think there will be an evolution on this. Meaningful adoption will require the PCAOB to establish standards for how External Auditors can rely on AI-enabled control and audit processes while implementing appropriate validation procedures beyond traditional manual testing.” The executive envisions Internal Audit and control functions evolving to an AI-assisted model with human oversight focused on exceptions within the next three to four years.
Another CAE chimed in, “I don’t disagree. But I would add that there probably needs to continuously be some sort of ongoing validation that the model is still operating as intended — that it hasn’t gone astray and created hallucinations or anything like that. I don’t think you’ll need to review every single transaction. But is there an annual or semi-annual kind of UAT done to validate that the model’s still doing what you expect it to do?”
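As a thought experiment, that kind of periodic validation could be as simple as re-running a frozen “golden” test set through the model and confirming accuracy hasn’t drifted. This is a minimal sketch, not a prescribed procedure: the golden-set contents, the accuracy threshold, and the toy extraction model are all illustrative assumptions.

```python
def run_periodic_uat(model, golden_set: list[tuple[str, str]],
                     threshold: float = 0.98) -> dict:
    """Re-run a frozen 'golden' test set through the model and confirm
    outputs still match the expected answers at an agreed-upon rate."""
    hits = sum(1 for doc, expected in golden_set if model(doc) == expected)
    accuracy = hits / len(golden_set)
    return {"accuracy": accuracy, "passed": accuracy >= threshold}


# Toy stand-in for an AI extraction model, purely for demonstration.
def toy_extractor(document: str) -> str:
    return document.split(":")[-1].strip()


golden = [
    ("Contract value: $1.2M", "$1.2M"),
    ("Renewal term: 24 months", "24 months"),
]
print(run_periodic_uat(toy_extractor, golden, threshold=1.0))
```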
In the meantime, many of these leaders’ organizations are showing an increased appetite for governance over internal use of AI. They’re increasingly viewing AI as an emerging risk from an ERM perspective, so the conversation is getting more board- and management-level attention.
In many cases, this is helping to elevate critical conversations around data governance, quality, security, and controls; around training employees in responsible AI use; and around putting effective governance in place for AI use, tool selection, and implementation.
We Can Do It With AI — or Can We?
KPMG’s website proclaims, “You can with AI.”
Deloitte’s invites you to use AI and engineering to “transform the heart of your business across products, platforms, and operations.”
EY’s affirms, “We embed leading-edge AI capabilities into everything we do for you.”
PwC’s asserts, “We help you define what success looks like, reimagine business models and build AI solutions at scale — with expertise you can rely on, insights that matter and technology that delivers results you can trust.”
Except maybe you can’t trust those results? Even though the firms claim they can help you build “technology that delivers results you can trust,” most of their External Auditors aren’t ready to rely on that technology — yet.
Again, this is a new and evolving situation. There does seem to be some flexibility at the engagement-team level, with some External Audit partners tentatively willing to explore reliance options.
But the bottom line isn’t budging: Internal Auditors and External Auditors have a lot of work to do to figure out how to audit AI effectively. Let’s keep the conversation going.
The Internal Audit Collective believes that reliance will increase as guidance increases. However, we also believe that these challenges will continue until External Auditors are compelled to find solutions. In fact, that’s a key motivation behind our AI working groups. If you want to be able to use AI to meaningfully improve the quality and efficiency of your SOX work, consider joining the Internal Audit Collective and supporting these efforts. We are developing AI use cases that we will publish in the fall.
If more SOX teams push their External Auditors to consider relying on valid AI use cases, External Auditors are more likely to help us make it happen sooner.
When you are ready, here are three more ways I can help you.
1. The Enabling Positive Change Weekly Newsletter: I share practical guidance to uplevel the practice of Internal Audit and SOX Compliance.
2. The SOX Accelerator Program: A 16-week, expert-led CPE learning program on how to build or manage a modern SOX program.
3. The Internal Audit Collective Community: An online, managed community where you can gain perspectives, share templates, expand your network, and keep a pulse on what’s happening in Internal Audit and SOX compliance.