AI Governance Approval Process: Who Decides What?

A 2025 IBM study found that organizations with a formal AI governance program ship AI projects 28% faster than those without one. The counterintuitive lesson: clear approval rules don't slow AI down; ambiguity does. When nobody knows who can greenlight a customer-facing chatbot or pause a misbehaving model, every decision becomes a meeting, and every meeting becomes a delay.

The catch is that “AI governance approval” isn’t one process. It’s a layered system with different decision-makers for different risk tiers, different lifecycle stages, and different categories of harm. Most failures happen at the seams between those layers.

This article maps the full approval landscape: who owns intake, who classifies risk, who signs off on production deployment, who can pull the plug, and how leading US enterprises and federal agencies are structuring these decisions in 2026.

Why AI Approval Is Different from Traditional IT Approval

Traditional change advisory boards (CABs) were built for software that behaves the same way every time you run it. AI breaks that assumption.

A model that performed well on test data can drift in production. A vendor’s underlying foundation model can be silently updated. A low-risk internal tool can become a high-risk customer-facing one when a product manager exposes it to a new audience. Each of these scenarios requires re-approval, not because the code changed but because the risk profile did.

This is the operating reality that ISO/IEC 42001 codifies. The standard’s clauses on operational planning and control (Clause 8) explicitly require organizations to manage AI systems across their full lifecycle, with approval as a recurring checkpoint, not a one-time gate. The NIST AI Risk Management Framework reaches the same conclusion through its “Govern, Map, Measure, Manage” structure: approval decisions need to be reassessed every time the system, its data, or its context shifts meaningfully.

What this means in practice:

  • Approval is recurring, not terminal. A green light at deployment does not survive a model update, a data source change, or a new user population.
  • Decision rights must be tied to risk tier, not project size. A $5,000 internal tool that touches employee performance data needs more scrutiny than a $500,000 backend optimization model.
  • Approval evidence becomes audit evidence. Under the EU AI Act (which applies to any US company serving EU residents) and emerging state laws like the Colorado AI Act, approval records are legally discoverable artifacts.

The organizations getting this right treat approval as an operating model, not a policy document.

The Six Roles That Make Up an AI Approval Process

Every functional AI approval process in 2026 maps to six core roles. Companies use different titles, but the responsibilities are remarkably consistent across enterprise, mid-market, and federal contexts.

1. The Use Case Owner (Business Sponsor)

The use case owner is the business leader who wants the AI system built or bought. They register the system in the AI inventory, articulate the intended use, name the affected user populations, and own the business outcomes.

What they decide: whether to initiate a use case at all, what success looks like, and which constraints are non-negotiable.

What they don’t decide: risk tier, technical controls, or whether the system can go live. Those rights belong elsewhere.

2. The Model or System Owner (Technical Lead)

This is the person responsible for how the system actually works, typically a data science lead, ML engineer, or product engineering manager. They own model selection, training data, performance metrics, and the technical documentation that goes into approval packages.

In ISO/IEC 42001 terms, this role provides the operational evidence that Clause 8 controls are in place. In NIST AI RMF terms, they execute the “Map” and “Measure” functions for the specific system.

3. The AI Risk Reviewer (Second Line of Defense)

This is where most AI governance programs are still maturing. The risk reviewer sits in a second-line function, typically risk management, compliance, or a dedicated AI governance office. Their job is to challenge the use case owner and the model owner with questions like: What’s the worst-case failure mode? Who gets harmed if the model is wrong? What does the bias testing actually show?

Risk reviewers don’t approve systems. They produce assessments and recommend a risk tier. Their independence from the build team is what gives the approval process integrity.

4. The AI Governance Committee or Board

The committee is the cross-functional body that holds final approval authority for medium- and high-risk AI systems. A typical US enterprise committee includes representatives from legal, privacy, security, data science, the relevant business unit, and increasingly, HR (for employment-related AI) and ethics or DEI (for consumer-facing systems).

OneTrust’s published governance model shows how this works in practice: an executive-level committee meets quarterly to set policy, while smaller working groups meet more frequently to review individual use cases and use electronic voting for time-sensitive decisions between meetings. This two-tier rhythm is what keeps approval from becoming a bottleneck.

5. The Chief AI Officer or AI Governance Lead

In the US federal government, this role is now mandatory. Under OMB Memorandum M-25-21 (issued April 2025), every covered agency must designate a Chief AI Officer with agency-wide visibility into AI activities. The General Services Administration’s CAIO, for example, chairs both the AI Governance Board and the AI Oversight Committee.

In the private sector, the equivalent title varies (Chief AI Officer, Head of AI Governance, AI Ethics Officer, sometimes folded into the CISO or Chief Data Officer role). The function is the same: maintain the enterprise AI inventory, escalate cross-cutting issues, and own the relationship with regulators and external auditors.

6. The Independent Validator or Internal Auditor (Third Line of Defense)

The third line provides assurance that the first two lines are doing their jobs. For AI specifically, this means independently testing models for the failure modes flagged in the risk assessment, reviewing approval packages for completeness, and reporting to the audit committee or board on the health of the AI governance program.

Validators don’t approve individual systems either. They certify that the approval process itself is operating as designed.

How Risk Tiering Determines Who Approves What

Not every AI system needs the same approval rigor. Mature programs route systems through different approval paths based on risk tier, which is why intake and tiering are arguably the two most important steps in the entire process.

The four-tier model that most US enterprises now use looks like this:

| Risk Tier | Examples | Required Approvers | Typical Timeline |
|---|---|---|---|
| Minimal | Internal productivity tools, code completion assistants used by a single developer, document summarization for personal use | Use case owner + manager sign-off; logged in inventory | 1–3 days |
| Limited | Internal analytics dashboards using AI, customer support routing, marketing copy drafting with human review | Use case owner + risk reviewer + business unit head | 1–2 weeks |
| High | Customer-facing chatbots, hiring or promotion screening, credit decisioning, medical triage, content moderation at scale | Full AI Governance Committee, with legal, privacy, and ethics representation | 4–8 weeks |
| Unacceptable | Real-time biometric surveillance in public spaces, social scoring of citizens, manipulative AI targeting vulnerable groups | Prohibited; cannot be approved through standard process | N/A |

This tiering is directly inspired by the EU AI Act’s risk categorization, but US enterprises have adopted it because it works regardless of regulatory geography. A high-risk system in Texas faces the same failure modes as a high-risk system in Frankfurt, even if the legal exposure differs.

The key insight: most AI systems should fall into the minimal or limited tiers, and the approval process for those tiers should be lightweight enough that nobody is tempted to skip it. If your governance program is forcing every AI experiment through committee, you’ll either bottleneck innovation or push teams into shadow AI, where ungoverned tools get used without registration.

Guidehouse’s research on financial services AI governance puts it directly: governance committees should formalize roles across three lines of defense, where the first line builds and assesses, the second line governs and challenges, and the third line provides assurance. Tiering is what makes that three-line model practical at scale.
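
To make the routing mechanical, here is a minimal Python sketch that maps a risk tier to the approvers and target timelines from the table above. The role names and dictionary structure are illustrative assumptions, not a prescribed schema.

```python
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"

# Hypothetical routing table mirroring the tier matrix above.
APPROVAL_ROUTES = {
    RiskTier.MINIMAL: {
        "approvers": ["use_case_owner", "line_manager"],
        "target_business_days": 3,
    },
    RiskTier.LIMITED: {
        "approvers": ["use_case_owner", "ai_risk_reviewer", "business_unit_head"],
        "target_business_days": 10,
    },
    RiskTier.HIGH: {
        "approvers": ["ai_governance_committee"],  # includes legal, privacy, ethics
        "target_business_days": 30,
    },
}

def route_for_approval(tier: RiskTier) -> dict:
    """Return the approval route for a tier; prohibited use cases raise instead."""
    if tier is RiskTier.UNACCEPTABLE:
        raise ValueError("Prohibited use case: cannot be approved through the standard process")
    return APPROVAL_ROUTES[tier]

print(route_for_approval(RiskTier.LIMITED))
```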

The Six-Stage Approval Lifecycle

The actual approval workflow inside an AI governance program follows a consistent pattern, regardless of industry. Each stage has a distinct decision and a distinct decision-maker.

Stage 1: Intake and Registration

Every AI system enters the governance program through a single intake form. The use case owner registers the system, names the model owner, describes the intended use, identifies the user population, and lists the data sources. This information feeds the AI inventory.

Decision: Does this use case meet the minimum criteria to proceed (e.g., does it have a named owner, a defined business case, and access to required data)?

Decided by: AI governance program manager or intake coordinator.
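
For illustration, an intake record might capture the same fields as the form described above. This is a hypothetical schema; the field names are assumptions, not a standard.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class IntakeRecord:
    """Minimal sketch of an AI inventory entry created at intake (hypothetical schema)."""
    system_name: str
    use_case_owner: str
    model_owner: str
    intended_use: str
    user_population: str
    data_sources: list[str]
    registered_on: date = field(default_factory=date.today)

    def meets_minimum_criteria(self) -> bool:
        # The intake coordinator's gate: named owners, a defined use, and identified data.
        return all([self.use_case_owner, self.model_owner, self.intended_use, self.data_sources])

entry = IntakeRecord(
    system_name="Support ticket triage assistant",
    use_case_owner="VP Customer Operations",
    model_owner="ML Platform Lead",
    intended_use="Route inbound support tickets to the right queue",
    user_population="Internal support agents",
    data_sources=["historical ticket archive"],
)
print(entry.meets_minimum_criteria())
```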

Stage 2: Risk Tiering

The risk reviewer assesses the registered use case and assigns a risk tier. This is typically done using a structured questionnaire that probes for harm potential, scale of impact, reversibility of decisions, vulnerability of affected populations, and whether the system makes or recommends consequential decisions about people.

Decision: What tier does this system belong in, and what controls and approvals does that tier trigger?

Decided by: AI risk reviewer, with escalation to the governance lead if the tier is contested.
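
A hedged sketch of how a structured questionnaire could translate into a recommended tier. The factors mirror the paragraph above; the weights and cutoffs are purely illustrative and would be calibrated by the governance team.

```python
def recommend_tier(
    harm_potential: int,            # 0 (negligible) to 3 (severe)
    impact_scale: int,              # 0 (single user) to 3 (public at large)
    irreversibility: int,           # 0 (easily reversed) to 3 (irreversible)
    vulnerable_population: bool,
    consequential_decisions: bool,  # makes or recommends decisions about people
) -> str:
    """Recommend a risk tier from questionnaire answers. Weights and cutoffs are illustrative.

    Assumes prohibited (unacceptable) uses are screened out against a block list before scoring.
    """
    score = harm_potential + impact_scale + irreversibility
    if vulnerable_population:
        score += 2
    if consequential_decisions:
        score += 2
    if score >= 8:
        return "high"
    if score >= 4:
        return "limited"
    return "minimal"

# Example: resume-screening tool that recommends hiring decisions.
print(recommend_tier(2, 2, 2, vulnerable_population=False, consequential_decisions=True))  # "high"
```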

Stage 3: Documentation and Impact Assessment

For limited-risk systems and above, the model owner produces a system documentation package: model cards, data sheets, intended-use statements, known limitations, bias testing results, and security review findings. For high-risk systems, an algorithmic impact assessment (AIA) is required.

Decision: Is the documentation complete and does it accurately characterize the system?

Decided by: AI risk reviewer for completeness; legal and privacy reviewers for compliance accuracy.
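
One way a governance tool might operationalize the completeness check, sketched here with assumed artifact names:

```python
# Required artifacts by tier; names are illustrative, not a prescribed standard.
REQUIRED_ARTIFACTS = {
    "limited": {"model_card", "data_sheet", "intended_use_statement", "known_limitations"},
    "high": {
        "model_card", "data_sheet", "intended_use_statement", "known_limitations",
        "bias_testing_results", "security_review", "algorithmic_impact_assessment",
    },
}

def missing_artifacts(tier: str, submitted: set[str]) -> set[str]:
    """Return which required documents are still missing for the given tier."""
    return REQUIRED_ARTIFACTS.get(tier, set()) - submitted

print(missing_artifacts("high", {"model_card", "data_sheet", "intended_use_statement"}))
```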

Stage 4: Validation and Testing

Independent validators test the system against the failure modes flagged in the risk assessment. For high-risk systems, this includes bias testing across protected classes, adversarial testing for security, and human-in-the-loop testing for decision override workflows.

Decision: Does the system meet the performance, fairness, and safety thresholds defined for its tier?

Decided by: Validation team, with sign-off from the risk reviewer.
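
As a minimal illustration of the kind of fairness check a validator might run, the sketch below compares selection rates across groups against a configurable ratio threshold. The metric and the 0.8 default are assumptions for illustration; real validation plans use the tier-specific thresholds defined in the risk assessment.

```python
def selection_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """Compute the share of positive outcomes per group from (group, selected) pairs."""
    totals: dict[str, int] = {}
    positives: dict[str, int] = {}
    for group, selected in decisions:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + (1 if selected else 0)
    return {g: positives[g] / totals[g] for g in totals}

def disparity_flag(decisions: list[tuple[str, bool]], min_ratio: float = 0.8) -> bool:
    """Flag the system if any group's selection rate falls below min_ratio of the highest rate."""
    rates = selection_rates(decisions)
    highest = max(rates.values())
    if highest == 0:
        return False  # no positive outcomes anywhere; nothing to compare
    return any(rate / highest < min_ratio for rate in rates.values())

sample = [("group_a", True)] * 60 + [("group_a", False)] * 40 \
       + [("group_b", True)] * 35 + [("group_b", False)] * 65
print(disparity_flag(sample))  # True: 0.35 / 0.60 is roughly 0.58, below 0.8
```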

Stage 5: Approval Decision

For minimal-risk systems, the use case owner’s manager approves and the system enters the inventory as live. For limited-risk, the risk reviewer and business unit head co-approve. For high-risk, the AI Governance Committee meets, reviews the full package, and votes.

Decision: Can this system go to production, and if so, with what conditions, monitoring requirements, and recertification dates?

Decided by: Tier-appropriate approver, as defined above.

Stage 6: Monitoring and Recertification

Approval is not permanent. The model owner is responsible for ongoing monitoring of model performance, drift, and incidents. Every system has a recertification date, typically annually for low-risk systems and quarterly for high-risk ones. Material changes to the model, data, or use trigger immediate re-approval.

Decision: Does this system still operate within its approved parameters, and if not, what changes need re-approval?

Decided by: Model owner monitors and flags; risk reviewer determines whether re-approval is needed.
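
A small sketch of that trigger logic, using the cadences and material-change categories described in this article; the function itself is illustrative, not a reference implementation.

```python
from datetime import date, timedelta

# Recertification cadence by tier, per the guidance in this article.
RECERT_INTERVAL = {
    "minimal": timedelta(days=365),   # annual
    "limited": timedelta(days=182),   # semi-annual
    "high": timedelta(days=91),       # quarterly
}

# Changes treated as material for re-approval purposes.
MATERIAL_CHANGES = {"model", "training_data", "vendor", "user_population", "use_case"}

def needs_reapproval(tier: str, last_approved: date, changes: set[str],
                     today: date | None = None) -> bool:
    """True if a material change occurred or the recertification window has lapsed."""
    today = today or date.today()
    if changes & MATERIAL_CHANGES:
        return True
    return today - last_approved > RECERT_INTERVAL[tier]

print(needs_reapproval("high", date(2026, 1, 15), changes={"user_population"}))  # True
```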

Who Has the Authority to Pause or Pull a Live AI System?

This is the question most governance documents leave vague, and it’s the one that matters most when something goes wrong. The 2025 incidents involving large customer-facing AI systems have made one principle clear: the authority to pause must be widely distributed and unambiguous.

A defensible kill-switch authority structure looks like this:

| Authority Level | Who Holds It | When They Can Use It |
|---|---|---|
| Immediate pause | Model owner, security team, on-call SRE | Active incident, security breach, severe performance degradation, regulatory subpoena |
| 24-hour suspension | Use case owner, AI risk reviewer | Material accuracy or fairness concern, vendor incident, customer complaint pattern |
| Indefinite suspension | AI Governance Committee, Chief AI Officer | Failed recertification, material risk profile change, regulatory inquiry |
| Permanent retirement | AI Governance Committee with executive sponsor | End of useful life, replaced by alternative, business case no longer holds |

The principle: anyone closer to the system has narrower authority but faster response time. Anyone further away has broader authority but slower response. Both are necessary.

GSA’s federal model formalizes this through its AI Oversight Committee, which has authority to review every AI request and enforce privacy and security controls before deployment, with the Chief AI Officer maintaining agency-wide visibility to act when issues span multiple systems. Private-sector enterprises building their own structures should look at this federal pattern, not because they have to comply with M-25-21, but because the agency model has been pressure-tested under heavier accountability than most private programs face.
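
Translated into a sketch of an access-control check, the authority table above might look like this in code. The role and action names are hypothetical.

```python
# Hypothetical mapping of pause/suspend actions to the roles allowed to take them,
# mirroring the authority table above.
PAUSE_AUTHORITY = {
    "immediate_pause": {"model_owner", "security_team", "on_call_sre"},
    "suspend_24h": {"use_case_owner", "ai_risk_reviewer"},
    "suspend_indefinitely": {"ai_governance_committee", "chief_ai_officer"},
    "retire_permanently": {"ai_governance_committee"},  # with executive sponsor
}

def can_take_action(role: str, action: str) -> bool:
    """Check whether a role holds the authority to take a given action on a live system."""
    return role in PAUSE_AUTHORITY.get(action, set())

print(can_take_action("on_call_sre", "immediate_pause"))       # True
print(can_take_action("on_call_sre", "suspend_indefinitely"))  # False
```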

The Three Most Common Approval Process Failures

Pattern analysis across published case studies and incident reports surfaces three failure modes that show up repeatedly.

Failure 1: The Shadow AI Problem

When approval feels like a tax rather than a service, teams route around it. They use vendor AI features that ship inside SaaS tools they already have. They build prototypes they call “experiments” to avoid the governance process. They use personal accounts to access foundation models from home.

The fix is not more enforcement. It's making the lightweight tier genuinely lightweight: a 10-minute intake form, automated risk tiering for clearly low-risk use cases, and immediate approval for tools that fall below a defined harm threshold. If governance can't approve a chatbot prototype in three days, governance is the problem.

Failure 2: Unclear Decision Rights at the Tier Boundary

Most disputes inside AI governance programs happen at the boundaries between risk tiers. A team will argue its hiring tool is "limited risk" rather than "high risk" because it only screens resumes and doesn't replace the human decision-maker. The risk reviewer will argue the opposite.

The fix is to write down the boundary criteria explicitly and to give the AI risk reviewer final authority on tier assignment, with appeal to the governance committee. Without a designated tiebreaker, every borderline case becomes an escalation.

Failure 3: Approval Without Monitoring

A system gets approved, goes live, and then nobody looks at it again until something breaks. The OECD has documented this pattern repeatedly in post-incident reviews of public-sector AI failures: the approval was sound, the deployment was clean, but the model drifted over 18 months and the approval never got revisited.

The fix is to make recertification automatic and non-optional. Every approval should ship with a recertification date and a monitoring SLA. If the recertification date passes without action, the system status auto-flips to “expired” and the model owner has to reactivate it through the governance process.
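
The auto-expiry mechanism can be as simple as a scheduled job over the AI inventory. A minimal sketch, assuming hypothetical field names:

```python
from datetime import date

def expire_overdue_systems(inventory: list[dict], today: date | None = None) -> list[dict]:
    """Flip status to 'expired' for any live system whose recertification date has passed."""
    today = today or date.today()
    for system in inventory:
        if system["status"] == "live" and system["recertification_date"] < today:
            system["status"] = "expired"  # model owner must reactivate through governance
    return inventory

inventory = [
    {"name": "ticket-triage", "status": "live", "recertification_date": date(2026, 1, 1)},
    {"name": "copy-drafting", "status": "live", "recertification_date": date(2027, 1, 1)},
]
for system in expire_overdue_systems(inventory, today=date(2026, 6, 1)):
    print(system["name"], system["status"])  # ticket-triage expired, copy-drafting live
```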

How ISO/IEC 42001 and NIST AI RMF Shape US Approval Processes

US enterprises don’t operate under a single AI law the way EU companies do under the EU AI Act. The federal regulatory picture is fragmented: OMB memoranda govern federal agencies, the FTC enforces against unfair or deceptive AI practices, sectoral regulators like the OCC and CFPB issue model risk guidance, and states like Colorado, California, and Texas have passed their own AI laws (some of which are now in litigation following the December 2025 federal preemption executive order).

This is precisely why ISO/IEC 42001 has become the de facto governance backbone for US enterprises in 2026. The standard provides what the regulatory patchwork doesn’t: a single management system standard that maps to NIST AI RMF, satisfies most state law requirements, and demonstrates due care to the FTC.

For approval processes specifically, ISO/IEC 42001 contributes:

  • Clause 5 (Leadership): Establishes that top management owns the approval framework and is accountable for AI outcomes. This pushes approval authority up to the board level for the highest-risk decisions.
  • Clause 6 (Planning): Requires risk-based thinking, which is what justifies tiered approval paths instead of one-size-fits-all gates.
  • Clause 8 (Operations): Mandates lifecycle controls, which is the formal basis for recertification requirements.
  • Clause 9 (Performance Evaluation): Requires internal audit, which validates that the approval process is working as designed.

NIST AI RMF complements this by providing the risk vocabulary that approval committees use to discuss specific systems. When a committee asks “what does the bias testing actually show,” they’re invoking the “Measure” function. When they ask “who’s responsible if this fails,” they’re invoking the “Govern” function.

US enterprises that want a single, coherent answer to “how should we structure our AI approval process” are increasingly building ISO/IEC 42001-aligned management systems with NIST AI RMF risk language layered on top. It’s the combination, not either standard alone, that produces a defensible approval architecture.

For organizations looking to formalize this approach, the GAICC ISO/IEC 42001 Lead Implementer training is built around exactly this kind of integration.

What Good Looks Like: A Reference Approval Architecture for US Enterprises

Pulling everything together, here’s what a defensible AI governance approval process looks like in a typical US enterprise (1,000–10,000 employees) in 2026:

Governance bodies:

  • AI Governance Committee (chaired by Chief AI Officer or equivalent), meeting monthly, with quorum requirements and electronic voting between meetings for time-sensitive items.
  • AI Risk Working Group (cross-functional reviewers), meeting weekly to triage intake and recommend risk tiers.
  • Executive AI Steering Committee (C-suite), meeting quarterly to set policy and approve enterprise-level risk appetite.

Decision rights:

  • Use case owner: initiates and owns business outcomes.
  • Model owner: builds, documents, and monitors.
  • Risk reviewer: assesses and tiers.
  • Governance Committee: approves limited-risk and above; sets policy.
  • Chief AI Officer: maintains inventory, escalates cross-cutting issues, owns external reporting.
  • Internal Audit: independently assures the governance process annually.

Approval timelines (target):

  • Minimal-risk: 3 business days from intake to approval.
  • Limited-risk: 10 business days.
  • High-risk: 30 business days, with potential extension for material findings.

Monitoring cadence:

  • Minimal-risk: annual recertification.
  • Limited-risk: semi-annual recertification, with continuous incident monitoring.
  • High-risk: quarterly recertification, with real-time performance and bias monitoring.

This isn’t theoretical. It’s the structure that’s emerged from public case studies at OneTrust, Guidehouse’s financial services research, GSA’s federal implementation, and the dozens of mid-market enterprises that have published their AI governance frameworks in the past 18 months.
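
For teams that want this reference architecture in machine-readable form for workflow tooling, a configuration sketch might look like the following. The values mirror the targets above; the structure itself is an assumption.

```python
# Illustrative configuration for approval workflow tooling; keys and structure are assumptions.
REFERENCE_APPROVAL_CONFIG = {
    "minimal": {"approval_target_business_days": 3,  "recertification": "annual"},
    "limited": {"approval_target_business_days": 10, "recertification": "semi-annual"},
    "high":    {"approval_target_business_days": 30, "recertification": "quarterly"},
}

GOVERNANCE_BODIES = {
    "executive_ai_steering_committee": "quarterly",  # sets policy and enterprise risk appetite
    "ai_governance_committee": "monthly",            # approves limited-risk and above
    "ai_risk_working_group": "weekly",               # triages intake, recommends tiers
}

print(REFERENCE_APPROVAL_CONFIG["high"])
```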

FAQs

1. Who has final approval authority for AI systems in a US enterprise?

For high-risk AI systems, final approval typically rests with a cross-functional AI Governance Committee that includes legal, privacy, security, data science, and the affected business unit. For lower-risk systems, approval can be delegated to the use case owner’s manager or the AI risk reviewer. The Chief AI Officer or equivalent role typically owns the overall framework but doesn’t approve individual systems.

2. What’s the difference between an AI risk reviewer and the AI Governance Committee?

The AI risk reviewer is a second-line function that independently assesses each system, recommends a risk tier, and produces the documentation package. The AI Governance Committee is the cross-functional body that uses that assessment to make the final approval decision for medium- and high-risk systems. The reviewer assesses; the committee decides.

3. Do all AI systems need committee approval?

No, and they shouldn’t. Most AI systems should be classifiable as minimal or limited risk and approvable through lightweight workflows that don’t involve committee review. Committee approval should be reserved for high-risk systems, novel use cases that don’t fit existing patterns, and systems that materially change the organization’s risk profile. Forcing every AI use case through committee creates bottlenecks and pushes teams toward shadow AI.

4. How does the EU AI Act affect US companies’ approval processes?

Any US company that offers AI-enabled products or services to EU residents is subject to the EU AI Act, regardless of where the company is headquartered. For high-risk AI systems under the Act, this means mandatory risk management, documentation, human oversight, and post-market monitoring. Most US enterprises now design their approval processes to satisfy EU AI Act requirements as the highest-common-denominator standard, which simplifies global compliance.

5. What’s the role of the Chief AI Officer in approval decisions?

The Chief AI Officer typically does not approve individual AI systems. Their role is to maintain the enterprise AI inventory, ensure the governance framework is functioning, escalate cross-cutting risks, and serve as the accountable executive for AI governance to the board and external stakeholders. In federal agencies, OMB Memorandum M-25-21 makes the CAIO role mandatory; in the private sector, the title varies but the function is increasingly standard.

6. How often should approved AI systems be re-reviewed?

Recertification cadence should match risk tier. High-risk systems typically require quarterly review, limited-risk systems semi-annually, and minimal-risk systems annually. Material changes to the model, training data, vendor, user population, or use case should trigger immediate re-approval regardless of the calendar. Under ISO/IEC 42001 Clause 9, performance evaluation and management review are recurring obligations, not one-time events.

7. Who can pause a live AI system if something goes wrong?

Pause authority should be distributed by response time and scope. The model owner and security team can immediately pause for active incidents. The AI risk reviewer can suspend for 24 hours pending review. The AI Governance Committee or Chief AI Officer can suspend indefinitely. This layered structure ensures fast response when needed without requiring committee meetings for emergencies.

8. Does ISO/IEC 42001 certification require a specific approval structure?

ISO/IEC 42001 doesn’t prescribe a specific approval workflow, but it does require organizations to demonstrate that they have documented, risk-based, and consistently applied processes for AI lifecycle management. In practice, this means certified organizations have something close to the six-stage lifecycle described in this article, with clear roles, documented decisions, and audit-ready evidence at each stage.

Final Takeaway

The organizations that win at AI governance in 2026 aren’t the ones with the strictest approval processes. They’re the ones with the clearest decision rights, the most appropriate friction at each risk tier, and the discipline to revisit decisions as systems evolve.

If your organization is still figuring out who decides what, the practical next step is to inventory every AI system currently in use (including the shadow ones), assign each a provisional risk tier, and identify which of the six roles described above are missing or unclear in your current structure. That gap analysis is what turns AI governance from an aspiration into an operating model.

About the Author

Dr Faiz Rasool

Director at the Global AI Certification Council (GAICC) and PM Training School

Globally certified instructor in ISO/IEC, PMI®, TOGAF®, and Scrum.org disciplines with hands-on experience in ISO/IEC 42001 AI governance across the US, EU, and Asia-Pacific.
