
How to Build Your Business Case for AI in Internal Audit
To use AI, you need a yes. Here’s how to get it.
Generative AI is transforming how work gets done. For Internal Audit’s work and role to stay relevant and valuable, figuring out how to use AI isn’t just a goal; it’s an imperative.
Internal Auditors must harness AI’s potential while managing its risks. That’s why the Internal Audit Collective has three working groups focused on AI: Ben Sady’s is building a detailed AI governance playbook for Internal Audit, Alan Maran’s is developing AI use cases for Internal Audit, and Casey Atwater’s is working on AI use cases for SOX.
But there’s been a persistent theme in many of these conversations. Many Internal Audit and SOX teams are still stuck in the AI starting blocks.
It’s not that they don’t want to use AI. In some cases, they just don’t have the company’s permission or budget to do it.
Our whole industry is pushing us to use generative AI more. But there isn’t a ton of conversation around the reality that a lot of people are still struggling to get buy-in to use AI in the first place.
Concerns about AI costs are a common hurdle. But most teams’ and companies’ biggest hurdles stem from fear or uncertainty around AI risks like data privacy/security, compliance, inaccuracy, ethics, liability, IP infringement, and other areas. There’s also a learning curve many teams aren’t thinking about.
AI’s risks are monumental. But so are its opportunities. Plus, organizations that don’t embrace AI actually create another set of risks.
Because their employees and competitors will use AI, whether or not policies or budgets allow it.
Plus, high-performing employees — eager to expand their skills — may leave for more AI-friendly pastures.
So, how can Internal Audit and SOX teams build a business case for AI that effectively addresses these concerns? We interviewed several Internal Audit leaders to get some answers.
Overcoming AI Budget and Cost Concerns
AI costs come in two general categories: (1) licenses for AI technologies and (2) money and time spent training team members to use AI technologies effectively.
First, the good news: AI licenses may cost less than you think.
We polled the Internal Audit Collective about average costs for enterprise licenses that safeguard internal data and include agentic AI capabilities. Here’s what we found:
- Microsoft Copilot. $25–$42 per user per month.
- Google Gemini. $20–$30 per user per month, depending on bundling with other Google Workspace services.
- OpenAI ChatGPT. Customized pricing, but a reasonable estimate for planning purposes is $60–$100 per user per month. Costs vary based on enterprise-grade features such as administrative controls, security, privacy isolation, and dedicated performance.
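To put those ranges in planning terms, here is a minimal, illustrative cost sketch in Python. The midpoint rates and pilot headcount are assumptions for illustration only, not vendor quotes; verify current pricing before budgeting.

```python
# Illustrative annual license cost estimate for an AI pilot.
# Rates are assumed midpoints of the polled ranges above, not vendor quotes.

def annual_license_cost(users: int, per_user_monthly: float) -> float:
    """Annual license spend for a given headcount and per-user monthly rate."""
    return users * per_user_monthly * 12

# Assumed midpoint rates (confirm with vendors before budgeting).
rates = {"Copilot": 33.50, "Gemini": 25.00, "ChatGPT Enterprise": 80.00}

pilot_users = 15  # within the typical 10-50 user pilot range
for tool, rate in rates.items():
    cost = annual_license_cost(pilot_users, rate)
    print(f"{tool}: ${cost:,.0f}/year for {pilot_users} users")
```

Even at the high end, a 15-user pilot is a modest line item compared to the staff time it can free up, which is the comparison a business case should make explicit.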
Tips for Overcoming AI Budget and Cost Concerns
1. Begin By Building on What You Have
Don’t let perfect get in the way of good.
If you’re a Microsoft shop, ask for Copilot. If you’re on Google Workspace, ask for Gemini.
You’ll face less resistance by starting with any AI that can be easily integrated as an add-on to existing enterprise technologies.
2. Start Small and Scale Over Time
Many C-suite leaders hesitate because they know giving one team access means other teams will come calling. With that in mind:
- Keep your initial ask small. Most organizations start with small-scale deployments (10–50 users) to pilot AI use cases and get real-world feedback before scaling. One Internal Audit leader started by asking for licenses for just over half his headcount for a six-month pilot. That way, his team could use their initial successes to demonstrate ROI and justify the cost of licensing the entire team.
- Make your pilot program useful to the entire organization. For example, commit to using lessons learned from your implementation to help the organization prove ROI and help other teams avoid implementation challenges.
3. Educate Through Experience and Experimentation
Many teams also lack the budget or bandwidth to invest in specialized AI training.
Fortunately, as CAE Alan Maran’s experience has shown, “The real barrier isn’t technical skill. It’s mindset. And that’s good news, because it’s something you can shift with a little structure and the right culture.”
Internal Audit leader Alejandro Anievas shares this view. Said Alejandro, “We don’t need to be data scientists to understand the uses, benefits, and risks of AI. Auditors are a natural fit for AI adoption. We’re trained to understand processes, pick out inefficiencies, and assess risk. Those same skills qualify us to identify where AI can add value and see where using it can be risky. We don’t need to be experts — we just need to apply the inherent curiosity our profession is known for.”
Alan said, “What worked well for us was getting people to use AI in the context of their actual work. Instead of starting with theoretical training, we had team members prompt the tools to help them write process narratives, summarize walkthroughs, and draft SOX control descriptions. It wasn’t about making them AI experts. It was about building comfort and confidence.” He continued, “We reminded everyone regularly that the goal wasn’t perfection. The value comes from acceleration and iteration… At the end of the day, the most effective AI users weren’t the most technical people. They were the ones who stayed curious and kept trying new things.”
Overcoming AI Risk Concerns
It’s only natural to fear what you don’t understand. Fear of AI risks most often comes down to a simple lack of education.
Fortunately, that means many teams can use education to overcome fear-based resistance or hesitation. The key is to help stakeholders more clearly see and understand AI’s threats and opportunities across the business, as well as the controls they can use to mitigate the threats and capitalize on the opportunities.
Tips for Overcoming AI Risk Concerns
1. Arm Yourself With AI Education
The goal is to educate yourself, your team, and your organization, in that order.
This primarily involves investing time, not money. There are plenty of free resources out there, and you can get yourself up to speed quickly. For example, you should:
- Understand how generative AI tools work. Simply understanding that LLMs don’t draw from databases of facts — that they’re only guessing at right answers, and can easily be wrong — is a crucial foundation for safe, effective AI use. You’ll also understand why the “human in the loop” will always be essential. YouTube can help.
- Have a good grasp on use cases. That’s why the Internal Audit Collective’s AI-focused working groups, roundtables, and training programs exist! But there are tons of discussions and events you can join. For example, founder and The Audit Podcast host Trent Russell’s organization, Greenskies Analytics, hosts an Audit Analytics & AI Conference. Register here.
- Have a detailed understanding of AI risks and mitigating controls. Several frameworks help define AI risks and considerations, including the NIST AI RMF, ISO/IEC 42001:2023, the EU AI Act, the MIT AI Risk Repository, and the Cloud Security Alliance’s AI Controls Matrix.
- Have a foundational understanding of AI governance. The ISO/IEC 38507:2022 framework is a good place to start. So is the Internal Audit Collective, as Ben’s working group builds out and shares an AI governance playbook specific to Internal Audit. (More to come!)
2. Make AI a Regular Part of the Conversation
When more people are talking about AI, more people are aware of its risks, use cases, and how they can use it responsibly.
One Internal Audit Collective member said his team invests 30 minutes to an hour during every staff meeting to understand how people are using AI, their concerns, and so on. He also regularly raises the topic with business stakeholders, sharing his team’s successes and lessons learned and learning about their experiences.
3. Perform an AI Governance Review
In some organizations, however, C-suite leaders just aren’t budging on permitting AI, or are moving incredibly slowly. An AI governance review can help you address their risk concerns and provide assurance that AI risks are being managed appropriately.
At a high level, the scope of an AI governance review includes understanding:
- Data governance
- Impact on ITGCs
- Third-party risk management (TPRM) considerations and impact
- Entity-level governance
- AI governance ownership (e.g., roles and responsibilities, overseeing pilot exercises)
- Expected policies and controls
An Internal Audit Collective member and CAE at a manufacturing company — a notoriously slow-to-adopt industry — is performing an AI governance review later this year. With her C-suite staunchly resistant, the review is her team’s best hope for finding a path forward for using AI.
4. Demonstrate How Action Is Balanced With Caution
Again, once you better understand how AI works, you’ll better understand how essential the human in the loop will always be. Emphasizing how AI systems and governance embed human oversight and other safeguards is a key way to strengthen your business case.
As the manufacturing CAE said, “Continue to remind your senior leadership that you're not asking AI to do anything without human interaction. That's an absolutely essential piece of it, and there's no substitute for that human intervention. If they can get comfortable with the fact that any company-level data going through the tools or models is safely contained, that gives you another point to help convince them that AI can be a genuinely useful tool.”
5. Listen, Validate, and Persist
Take the time to understand stakeholders’ concerns. Dispelling myths and misconceptions requires facing them head on. Validate that the risks are real, and then share how specific controls (e.g., policy, contract, SOC reports, ITGCs) and continuous monitoring activities can help mitigate key AI risks.
In certain organizations and industries, AI implementation and governance will be a very, very long game. Keep beating the drum. You may have more success if you…
6. Focus on AI’s Opportunities
In a recent AuditBoard webinar, Trent made a key point: Business leaders would much rather talk about opportunities than risks. So why not lead with AI’s opportunities?
Your key AI opportunities will depend on your organization’s industry, business model, strategic objectives, risk profile, and other variables. But AI offers pretty much everyone key opportunities in:
- Automating time-intensive, low-value administrative tasks. Every organization wants to get more work done using fewer resources. AI tools can serve as fast, eager-to-please “staff” who can build foundations for human Internal Auditors to review, refine, and build upon.
- Gaining competitive advantage. AI can help organizations innovate, drive more intelligent insights, align with consumer or investor expectations, and otherwise differentiate themselves.
- Attracting and retaining top talent. High-performing team members want to be equipped to be the Internal Auditors of the future. AI is an integral part of that picture.
- Strengthening risk management and monitoring. AI tools can help increase audit and testing coverage, support continuous risk monitoring, and drive data-based insights that improve decision-making.
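To give the risk-monitoring opportunity above some concrete flavor, here is a minimal, rule-based sketch of the kind of check that AI-assisted audit tooling can extend: flagging duplicate expense submissions. The field names and sample records are hypothetical, not drawn from any specific system.

```python
# Illustrative continuous-monitoring rule: flag expense records that
# share the same employee, amount, and date. Field names are hypothetical.

from collections import Counter

def flag_duplicate_expenses(expenses):
    """Return expense records whose (employee, amount, date) key repeats."""
    keys = Counter((e["employee"], e["amount"], e["date"]) for e in expenses)
    return [e for e in expenses
            if keys[(e["employee"], e["amount"], e["date"])] > 1]

sample = [
    {"employee": "A101", "amount": 250.00, "date": "2024-05-01"},
    {"employee": "A101", "amount": 250.00, "date": "2024-05-01"},  # duplicate
    {"employee": "B202", "amount": 99.00,  "date": "2024-05-02"},
]
print(flag_duplicate_expenses(sample))  # both A101 records are flagged
```

Simple rules like this run across entire populations rather than samples; the pitch to leadership is that AI layers summarization, anomaly detection, and triage on top of that full-coverage foundation.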
Building Your AI Business Case
So, with all of these considerations in mind, what can Internal Audit and SOX teams do to ensure that their pitches will land?
As Alan pointed out, your pitch is either where your AI conversation stalls or gains traction.
So you’ll want to make sure your business case (1) directly addresses stakeholders’ likely concerns about costs and risks (see all of the above) and (2) lays all the foundations needed for your team to securely and safely deploy AI.
Tips for Building Your AI Business Case
1. Involve All the Right Stakeholders — Early
Bring your IT, security, and legal partners into the conversation ASAP. They’ll help you identify risks and make sure your rollout aligns with enterprise policies. As Alan said, “This positions you as a thoughtful, forward-looking partner, not just someone chasing the latest AI technologies.”
If your organization has a dedicated AI team, make sure you’re aligned out of the gate. Work with them to understand available technologies and policies and prioritize your use cases, goals, and needs.
2. Focus on Outcomes, Not Tools
Alan said, “The key is to stop talking about tools, and start talking about outcomes. Executives and audit committees want to hear that AI will help the team reduce testing time, improve coverage, and accelerate decision-making.”
Proving that out means homing in on specific use cases, outcomes, and impacts. How will AI help your organization create, enhance, sustain, and protect value?
“We’ve had the most success when we start with a specific use case, apply AI to it, and then show the before and after,” said Alan. “For example, we might take a routine SOX process like travel and expense or user access reviews, apply AI to the documentation and summarization tasks, and compare the time and consistency of the results. When you bring that kind of tangible output to the table, it becomes much easier to win support.”
Make sure use cases focus on problems worth solving, and prioritize use cases strategically (e.g., quick wins vs. long-term value, alignment with overall strategy). Determine how you’ll measure success.
3. Embed a Plan for Data Availability
Make sure your business case includes a plan for improving data availability.
“Data availability is the real unlock and part you cannot skip. If your data isn’t accessible, AI becomes more of a novelty than a game-changer,” explained Alan. “Someone on your team, or a partner team, needs to know where the right data lives. You can start with manual pulls and ad hoc access. But if you want to scale, you’ll need to move toward APIs, integrated systems, and automated feeds. The goal should be continued integration that supports continuous auditing and broad coverage across all relevant transactions.”
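The manual-pull-to-automated-feed progression Alan describes can be sketched as follows. This is an illustrative stand-in only: an in-memory list plays the role of the data source, and in practice you would swap in an API client, database query, or scheduled extract.

```python
# Sketch of the manual-pull -> automated-feed progression.
# The data source here is a stand-in (an in-memory list); a real
# implementation would plug in an API client or database query.

def fetch_transactions(source):
    """Pluggable fetcher: start with manual extracts, later swap in an API."""
    return list(source)

def continuous_coverage(transactions, tested_ids):
    """Share of the transaction population covered by testing."""
    if not transactions:
        return 0.0
    covered = sum(1 for t in transactions if t["id"] in tested_ids)
    return covered / len(transactions)

feed = [{"id": 1, "amount": 120.0}, {"id": 2, "amount": 75.5}, {"id": 3, "amount": 410.0}]
txns = fetch_transactions(feed)
print(f"Coverage: {continuous_coverage(txns, tested_ids={1, 3}):.0%}")
```

The design point is the seam: keeping the fetcher pluggable lets the team start with ad hoc pulls today and move to automated feeds later without rewriting the monitoring logic on top.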
Without this foundation, cautions Alan, the promise of AI can’t be realized. Accordingly, “Your business case should include a plan for improving data pipelines just as much as it includes the tools and training.” Speaking of which…
4. Include a Plan for Upskilling Your Team
Again, your business case needs to supply the foundations needed for secure, safe AI use. That means accounting for how you’ll mature your team’s AI skills and capabilities.
Formal training courses and certifications are one option. But you can do a lot internally, and internal efforts are often better suited to your specific use cases. For example, you can:
- Establish standard AI operating procedures
- Hold regular sessions sharing AI use cases or engaging in task-focused experimentation
- Conduct prompt engineering sessions
- Build an AI prompt library or other AI knowledge repository
- Enlist external SMEs to provide training on specific AI topics
Get Moving — or Get Left Behind
Internal Audit adoption of AI is increasing fast — much faster than data analytics adoption, which has been slow-rolling for the past 25 years. A recent Internal Audit 360 article highlighted a Wolters Kluwer finding that 39% of Internal Auditors are already using AI, and another 41% intend to adopt AI within the next year.
Since we’re all auditors here, I don’t need to help you do the math on that. I’ll just remind you what that math means: If your team is still waiting on the sidelines, you will get left behind.
When it comes to building bulletproof business cases for using AI in Internal Audit or SOX, we don’t have all the answers yet. But we’re going to keep doing the work to find as many answers as we can. Again, look for upcoming Internal Audit Collective eBooks on AI use cases and governance.
And in the meantime, here’s my best advice, piling on everyone else’s excellent advice:
Lean in. Get curious.
Take a class or two. Start experimenting, and get disciplined about pursuing one or two use cases. Better yet, get involved in Ben, Alan, or Casey’s AI working groups by joining the Internal Audit Collective today!
Whatever you do, don’t let perfect get in the way of good. Just get started already.
When you are ready, here are three more ways I can help you.
1. The Enabling Positive Change Weekly Newsletter: I share practical guidance to uplevel the practice of Internal Audit and SOX Compliance.
2. The SOX Accelerator Program: A 16-week, expert-led CPE learning program on how to build or manage a modern & contemporary SOX program.
3. The Internal Audit Collective Community: An online, managed community to gain perspectives, share templates, expand your network, and keep a pulse on what’s happening in Internal Audit and SOX compliance.