A brave new world: AI governance and internal audit
This is the thirteenth installment in OCEG™'s expert panel blog series, showcasing accomplished professionals from OCEG™'s Solution Council member companies and giving you direct access to the industry leaders who shape our standards and drive innovation in governance, risk, and compliance. Through these insights, you'll discover the connections and expertise available through your OCEG™ membership. In this post, Richard Chambers from AuditBoard explores why understanding AI fundamentals, from LLMs to the three types of organizational AI use, is critical for internal audit teams. He also provides a seven-step framework for building responsible AI governance that balances innovation with risk management, in a landscape where only 25 percent of organizations have fully implemented AI governance programs.
The AI revolution is well under way, and most experts believe we are just at the beginning. AI implementation within organizations is only expected to grow. In McKinsey & Company's report The state of AI: How organizations are rewiring to capture value, 78 percent of survey respondents said their organizations have adopted AI in at least one business function. Responses also show that AI is most often used in sales and marketing, software engineering, and service operations.
AI comes with its own risks, but I believe that abstaining from AI usage entirely is far riskier. The better option is to understand AI and implement responsible AI governance, which helps you maintain a level playing field while mitigating risk and ensuring proper compliance.
Artificial intelligence: The fundamentals
I always think it is good to understand the basics before you start, especially with something as nuanced as AI. No other two-letter combination seems to have created as much confusion and misunderstanding as “AI”. The confusion is only exacerbated when organizations and the general public misuse the term, creating a vicious cycle.
LLMs: The heart of current AI
Most current AI tools involve large language models (LLMs). LLM providers train their models on the internet's vast mountains of words, numbers, and ideas. When you ask an AI tool a question, the LLM tries to predict the next word, or series of words, most likely to answer your question or prompt.
While these companies filter out some of the worst content, the imperfect nature of LLMs means problematic material still slips through. Along with actual facts, you get a mix of opinions, bad advice, and sometimes downright false information. That is why organizations need human safeguards in place to review anything generated by AI.
Therein lies the role of responsible AI governance, but it is severely lacking: in the same McKinsey & Company report, only about 27 percent of respondents reported any form of employee review of AI-generated content before use.
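To make the prediction mechanism concrete, here is a minimal sketch of next-token prediction. It assumes the open-source Hugging Face transformers and PyTorch packages are installed and uses the small GPT-2 model; the prompt text is purely illustrative.

```python
# Minimal sketch: an LLM "answers" by ranking the most likely next tokens.
# Assumes transformers and torch are installed; GPT-2 is used for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Internal audit's role in AI governance is to"  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # one score per vocabulary token, per position

# The model's continuation is just the highest-probability next tokens.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for token_id, p in zip(top.indices, top.values):
    print(f"{tokenizer.decode(int(token_id))!r}: {p:.3f}")
```

The top-ranked continuations are statistical guesses about what text usually follows, not verified facts, which is why human review of anything generated this way remains essential.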
Three types of AI
While LLMs form the foundation of modern AI systems, there are three variations of AI most commonly used within organizations.
- Bring your own AI (BYOAI) - With BYOAI, your employees use their own external AI tools for company business. This typically means ChatGPT and other widely available LLM services or personalized chatbots.
- Embedded AI - Many services now have their own AI tools embedded in their software. These include Google Gemini, Microsoft Copilot, and Salesforce Einstein, to name a few.
- Built and blended - More companies are investing in their own enterprise-level AI solutions that are either built from the ground up or blended from existing AI tools.
Each of these types of AI comes with its own challenges and risks, though I believe embedded AI poses the biggest near-term challenge for AI governance teams due to its sheer prevalence and lack of oversight. Gartner predicts that over 80 percent of independent software vendors will embed generative AI in their enterprise applications by 2026.
How to get started with AI governance
If you, like many other business leaders, are not sure where to start, here is a step-by-step guide to navigate the beginning stages of AI governance.
1. Assemble a cross-functional committee.
Instead of relying on any single person or team, create a committee comprising members from across your whole organization. Your committee should represent all relevant forms of expertise, including legal, product, technical, and ethical.
This also involves building a charter that allows the committee to influence company products and processes through policy changes.
2. Create and publish AI ethics principles.
Create a statement of values, principles, and ethics for AI usage that aligns with your organization’s corporate values. Publish these internally, and be open to ongoing discussions about these principles.
3. Inventory all algorithmic AI systems.
Create an inventory of all the automated algorithmic systems (both internal and external) used by your organization. That includes AI embedded in applications, as well as BYOAI tools used by employees.
4. Deploy a first round of policies and procedures.
Create a risk review process and establish rules for when that process gets triggered. Conduct a risk assessment for all the AI tools you have inventoried, recording and ranking each tool based on its potential risk (a minimal scoring sketch follows this list).
5. Connect causes to harms.
Look for gaps, and consider broader socio-technical systems. Many causes of harm hide at the points where products and humans interact.
6. Prioritize harms.
Prioritize harms based on their frequency or likelihood, along with the stakeholder groups affected and the underlying causes. Estimate the potential risk of each harm.
7. Document your findings.
You should maintain documentation throughout the process, but make sure you have a prioritized list of harms and a list of unmitigated risks for disclosure.
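To make steps 3, 4, and 6 concrete, here is a minimal, hypothetical Python sketch: an AI system inventory ranked by a simple likelihood-times-impact score. The record fields, the example entries, and the 1-to-5 scales are illustrative assumptions, not a prescribed standard.

```python
# Hypothetical sketch: an AI system inventory (step 3) with a simple
# likelihood-x-impact risk score (step 4) used to prioritize review (step 6).
# Field names, example entries, and the 1-5 scales are illustrative only.
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    category: str    # "BYOAI", "embedded", or "built and blended"
    owner: str       # accountable team or individual
    likelihood: int  # 1 (rare) to 5 (frequent): chance of a harmful output
    impact: int      # 1 (minor) to 5 (severe): consequence if one occurs

    @property
    def risk_score(self) -> int:
        return self.likelihood * self.impact

inventory = [
    AISystem("ChatGPT (employee BYOAI)", "BYOAI", "All staff", 4, 3),
    AISystem("Microsoft Copilot", "embedded", "IT", 3, 3),
    AISystem("In-house claims triage model", "built and blended", "Operations", 2, 5),
]

# Review the highest-scoring systems first.
for system in sorted(inventory, key=lambda s: s.risk_score, reverse=True):
    print(f"{system.risk_score:>2}  {system.category:<18}  {system.name}")
```

In practice, each entry would also capture the stakeholder groups, specific harms, and mitigation status identified in steps 5 through 7.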
Common AI governance challenges and solutions
Your individual experience with AI governance will vary based on your own organization's needs and circumstances, but you can expect three general challenges.
Uncontrolled use and deployment of AI
Most organizations do not have clear internal standards or regulations stipulating how or when employees should use AI. That only gets muddier with applications that have embedded AI features.
Solution: Thankfully, this is a good opportunity for your internal audit team. Develop advisory projects and audit your organization’s AI governance program with a focus on increased risk management and oversight of AI tools.
Undefined risks
It’s difficult to manage or predict risks if your organization doesn’t define them.
Solution: Develop AI risk review protocols that enable both monitoring and mitigation. Present current AI risk research and benchmarking from reliable sources, along with knowledge of current regulatory changes.
This is also where the internal audit team can help.
- Work with management to create a risk review process
- Quantify the impact of AI for individual internal use cases
- Organize cross-functional risk identification sessions to help understand how employees are using AI and the potential risks
- Identify where AI may impact operational processes, particularly in higher-risk areas
Underdeveloped AI management team
This problem comes down to ownership: who “owns” AI governance and risk management? While many see AI as a problem for the IT team, AI has the potential to affect nearly every aspect of a business, so it is worth gathering perspectives from across the organization.
Solution: Instead of putting AI governance on one team, create a full AI governance board that includes members from all teams in the organization. This gives you multiple perspectives and allows for a more holistic approach to risk and governance.
Through this AI governance board, develop clear values, principles, and goals for AI usage within the organization.
ABL: Always be learning
Implementing responsible AI governance comes with plenty of challenges, but it’s worth doing to mitigate risk, ensure long-term success, and create value. The good news: It’s never too late to start. In a survey of over 400 GRC and audit professionals in July 2025, AuditBoard found that only 25 percent of respondents had a fully implemented AI governance program.
Even better news: Balancing AI governance and innovation largely depends on your willingness to learn. Embedding AI education into your organizational culture keeps you and your team proactive and ahead of the competition.