AI Governance Workflow: From Use Case to Approval (US Guide 2026)

Eighty-six percent of US enterprises claim they have a complete AI inventory. The Purple Book Community’s State of AI Risk Management 2026 found that most of those inventories quietly exclude vendor-embedded models, employee-driven GenAI, and anything that didn’t enter through a formal door. The intake door, in other words, is the entire problem. An AI governance workflow is the system that builds and guards that door, moving every use case from a Slack message or a hallway pitch to a documented, risk-tiered, approved (or rejected) decision with an audit trail. This guide walks through the seven stages, the artifacts each stage produces, the roles that own them, and how to make the whole thing hold up to NIST AI RMF, ISO/IEC 42001, and the new wave of US state laws.

Why most AI governance fails before approval

Most failures in AI governance are not failures of judgment. They’re failures of routing. A use case never reaches the people who could have caught the problem because there is no clear path from “someone has an idea” to “the right reviewers see it in time.”

The downstream symptoms look like governance failures: a customer-service chatbot that quoted refund policies it invented, a screening tool that filtered out qualified candidates, a vendor model that quietly trained on customer data. The root cause is usually that no intake form existed, or it existed but had no teeth, or it had teeth but no SLA, so submissions went to a queue that nobody owned.

The Deloitte State of AI in the Enterprise 2026 report puts hard numbers on this: only one in five companies has a mature governance model for autonomous AI agents, even though agentic adoption is climbing fast. McKinsey’s State of AI Trust 2026 survey found the average responsible AI maturity score sits at 2.3 out of 5, with strategy and oversight lagging behind technical capability. Both findings point to the same gap: companies are deploying AI faster than they’re building the workflows that decide which AI gets deployed.

A working AI governance workflow does three things: it gives every use case a single front door, it forces a tier decision before any work is sanctioned, and it produces evidence at every stage that an auditor can reconstruct a year later.

Skip any of those and you don’t have governance. You have paperwork.

The seven stages of an AI governance workflow

Every workable AI governance workflow has roughly the same seven stages. The names vary, the tooling varies, the rigor at each stage varies by tier, but the sequence is consistent across mature programs. Here it is end to end:

  1. Intake. A standardized request enters the system. One front door, one form, one queue.
  2. Triage. Within five business days, a governance analyst confirms the submission is complete and assigns a provisional risk tier.
  3. Risk tiering. A scored assessment classifies the use case as low, medium, high, or prohibited based on impact, data sensitivity, autonomy, and population reached.
  4. Impact assessment. For medium and high-tier cases, a structured assessment documents harms, mitigations, fairness considerations, and human oversight design.
  5. Review. The right combination of legal, privacy, security, ethics, and business owners examines the assessment. Composition scales with tier.
  6. Approval (or rejection, or fast-track). A documented decision is made by the named decision authority for that tier, with conditions if approved.
  7. Post-deployment monitoring. The use case enters the AI inventory and the model lifecycle. Drift, scope changes, and vendor model swaps trigger re-entry to the workflow.

Two principles tie these together. First, every stage produces a named artifact: an intake record, a triage note, a tier assessment, an impact assessment, a review log, an approval memo, a monitoring plan. Second, every stage has an owner and an SLA. Without artifacts, you can’t prove governance happened. Without SLAs, the workflow becomes the reason the business stops cooperating.
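To make the artifact-and-SLA principle concrete, here is a minimal Python sketch of the seven stages as data. The stage names, artifacts, owners, and day counts below are illustrative assumptions, not a standard; a real program fixes them in its charter.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Stage:
    name: str       # workflow stage
    artifact: str   # named artifact the stage must produce
    owner: str      # role accountable for the artifact
    sla_days: int   # business days allowed for the stage

# Illustrative values only; a real program sets these in its charter.
WORKFLOW = [
    Stage("intake",            "intake record",     "submitter",          1),
    Stage("triage",            "triage note",       "governance analyst", 5),
    Stage("risk_tiering",      "tier assessment",   "governance analyst", 3),
    Stage("impact_assessment", "impact assessment", "business owner",     10),
    Stage("review",            "review log",        "review board",       10),
    Stage("approval",          "approval memo",     "decision authority", 5),
    Stage("monitoring",        "monitoring plan",   "business owner",     0),  # ongoing
]

def missing_artifacts(produced: set[str]) -> list[str]:
    """Return the artifacts an auditor would find missing for a use case."""
    return [s.artifact for s in WORKFLOW if s.artifact not in produced]
```

Encoding the stages this way makes the audit-readiness test later in this guide trivial: pull a use case, collect what it produced, and check for gaps.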

The intake form: what to actually ask

A good intake form takes ten to fifteen minutes for a non-technical submitter to complete. Anything longer kills participation and pushes use cases back into the shadows. Anything shorter doesn’t generate enough signal for triage to assign a tier.

The fields that matter, in roughly the order they should appear:

  • Use case name and one-sentence description. Plain English. If the submitter can’t explain it without jargon, that’s a triage flag.
  • Business owner and submitting team. Named individuals, not departments.
  • Problem statement. What decision or task is the AI being asked to support or automate?
  • AI type. Predictive ML, generative AI, agentic AI, classification, recommendation, vision, speech. Multi-select.
  • Build vs. buy vs. embed. In-house model, third-party API (OpenAI, Anthropic, Google), embedded feature inside an existing SaaS tool, or vendor-fine-tuned.
  • Data inputs. What categories of data feed the system? Personal data, financial data, health data, employee data, customer content, public data.
  • Decision autonomy. Does the AI make a decision, recommend one, or just summarize? Critical for tiering.
  • Population affected. Customers, employees, applicants, patients, the public, and approximate scale.
  • Consequential decision domains touched. Eight under the Colorado AI Act: employment, housing, financial services, healthcare, education, insurance, government services, legal services. This single field will pre-screen most high-risk classifications.
  • Existing process being replaced or augmented. Helps reviewers compare baseline error rates.
  • Target launch date. Sets the SLA expectation honestly.

Two fields are easy to forget but pay off later: a free-text “what could go wrong” prompt that surfaces the submitter’s own intuitions, and a vendor name field that integrates with the procurement queue so legal review can run in parallel.
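For teams building the form in a workflow tool, the fields above translate naturally into a structured record. The sketch below is one way to capture it in Python so triage and discovery tooling can consume submissions programmatically; the field names and option strings are assumptions, not a standard schema.

```python
from dataclasses import dataclass

@dataclass
class IntakeRequest:
    use_case_name: str
    one_sentence_description: str
    business_owner: str                 # a named individual, not a department
    submitting_team: str
    problem_statement: str
    ai_types: list[str]                 # e.g. ["generative", "agentic"]
    sourcing: str                       # "build", "buy_api", "embedded_saas", "vendor_fine_tuned"
    data_inputs: list[str]              # e.g. ["personal", "health", "customer_content"]
    decision_autonomy: str              # "summarizes", "recommends", "decides"
    population_affected: str            # customers, employees, applicants, patients, public
    approximate_scale: int
    consequential_domains: list[str]    # Colorado AI Act domains touched, if any
    existing_process_replaced: str
    target_launch_date: str
    what_could_go_wrong: str = ""       # free-text intuition from the submitter
    vendor_name: str = ""               # routes a parallel procurement/legal review
```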

A 2024 Pertama Partners benchmark on enterprise intake processes found that organizations that publish their criteria openly approve 30 to 50 percent of submitted use cases, defer 20 to 30 percent, and reject the rest. Those numbers signal a healthy filter. A 95 percent approval rate means the form isn’t asking the right questions; a 5 percent approval rate means the business has stopped trusting the workflow and is routing around it.

Risk tiering: turning vague impact into a clear decision

Risk tiering is the load-bearing decision in the entire workflow. Get it right and the rest of the workflow scales: low-tier cases move fast, high-tier cases get the scrutiny they deserve. Get it wrong and either the business slows to a crawl or the next regulator audit becomes a forensics exercise.

The most defensible tier models score against four axes, then take the highest individual score as the tier:

| Axis | Low (Tier 1) | Medium (Tier 2) | High (Tier 3) | Prohibited (Tier 4) |
| --- | --- | --- | --- | --- |
| Impact on a person | No effect on opportunities, finances, health, or rights | Influences a decision about a person but does not determine it | Materially influences or determines a consequential decision | Manipulative use, social scoring, real-time biometric categorization |
| Data sensitivity | Public or non-personal data only | Internal personal data, no special categories | Special-category personal data, financial, health, biometric | Data acquired without lawful basis |
| Autonomy | Summarization or content drafting with full human review | Recommendation that humans usually act on | Decision executed without case-by-case human review | Fully autonomous decision in a regulated domain without override |
| Population reach | Single team, internal | One business unit, hundreds of users | Cross-customer or workforce-wide | Public-facing in a regulated decision context |

Three rules keep this honest. First, the tier is set by the highest score on any axis, not an average: a low-impact tool processing biometric data is still high tier. Second, a fast-track path exists for genuinely Tier 1 cases (e.g., internal meeting summarization with no PII): the triage analyst can approve within days, no committee required. Third, any submitter or reviewer can request escalation; tiers can move up but not down without explicit re-justification.
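The max-of-axes rule is simple enough to encode directly. A minimal sketch, assuming each axis has already been scored 1 through 4 against the table above:

```python
# Scores per axis: 1 = low, 2 = medium, 3 = high, 4 = prohibited.
TIER_NAMES = {1: "Tier 1 (Low)", 2: "Tier 2 (Medium)", 3: "Tier 3 (High)", 4: "Tier 4 (Prohibited)"}

def assign_tier(impact: int, data_sensitivity: int, autonomy: int, population_reach: int) -> str:
    """The tier is the highest score on any axis, never an average."""
    scores = [impact, data_sensitivity, autonomy, population_reach]
    return TIER_NAMES[max(scores)]

# A low-impact tool that processes biometric data is still high tier:
assert assign_tier(impact=1, data_sensitivity=3, autonomy=1, population_reach=1) == "Tier 3 (High)"
```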

The Tier 4 row matters for any organization with EU exposure. Prohibited practices under the EU AI Act apply regardless of where the developer sits, and the August 2025 enforcement window for the prohibited-practices article has already begun. US-only firms still benefit from a defined Tier 4: it forces an explicit “we don’t do this” line that policy and legal can defend.

Roles and decision rights: who approves what

The single most common reason AI governance workflows stall is that nobody knows who actually decides. Designating an AI Review Board without a clear charter creates a venue, not a decision. The fix is a tier-based RACI:

| Stage | Tier 1 (Low) | Tier 2 (Medium) | Tier 3 (High) |
| --- | --- | --- | --- |
| Intake review | Governance analyst | Governance analyst | Governance analyst |
| Risk tier sign-off | Governance analyst | Governance lead | Governance lead + Privacy/Legal |
| Impact assessment owner | Submitter (light template) | Submitter + Governance partner | Joint: business + Risk + Privacy |
| Approval authority | Governance lead | AI Review Board | AI Review Board + executive sponsor |
| Conditions enforcement | Business owner | Business owner + Governance | Business owner + Governance + Risk |

A few practical points the org charts usually miss. The AI Review Board should have standing membership from legal, privacy, security/risk, data science, and at least one business representative. Five people make decisions, twelve people make minutes. The Chief AI Officer or equivalent owns the workflow itself, not individual approvals; their job is throughput and consistency, not vetoing use cases. And every tier needs a named escalation path: when reviewers can’t reach consensus, who breaks the tie, and how fast?
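One way to keep those decision rights from drifting is to hold the same matrix as data, so routing and escalation are deterministic rather than re-litigated per case. A minimal sketch; the role names are placeholders, and the real charter names actual individuals and bodies:

```python
# Approval authorities by tier, mirroring the matrix above (role names illustrative).
APPROVAL_AUTHORITY = {
    1: ["governance_lead"],
    2: ["ai_review_board"],
    3: ["ai_review_board", "executive_sponsor"],
}

# Who breaks the tie when reviewers cannot reach consensus (assumed roles).
ESCALATION_PATH = {
    1: "governance_lead",
    2: "chief_ai_officer",
    3: "executive_sponsor",
}

def approvers_for(tier: int) -> list[str]:
    """Return the named decision authority for a tier; Tier 4 has no approval path."""
    if tier == 4:
        raise ValueError("Tier 4 use cases are prohibited; no approval path exists.")
    return APPROVAL_AUTHORITY[tier]
```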

McKinsey’s 2026 RAI maturity data is clear about what separates leaders from laggards: it is not the existence of a committee, it is whether the committee has decision rights documented in a charter and whether those rights are honored when business pressure mounts.

Mapping the workflow to NIST AI RMF, ISO/IEC 42001, and US state law

A governance workflow that doesn’t map to recognized frameworks is just a process: useful internally, useless when an auditor or attorney general comes calling. The good news is that one well-designed workflow can satisfy NIST AI RMF, ISO/IEC 42001, and the dominant US state laws simultaneously, because they’re more aligned than they look.

NIST AI RMF (US baseline)

The four functions, Govern, Map, Measure, Manage, line up cleanly with the workflow. Govern lives in the charter and roles. Map is what the intake form and tiering process do. Measure happens inside the impact assessment and during post-deployment monitoring. Manage is the approval and ongoing controls. NIST RMF is voluntary at the federal level, but its language is now embedded in state safe-harbor provisions, federal agency guidance, and the Treasury Department’s February 2026 framework, which translates RMF principles into 230 operational control objectives for financial services.

ISO/IEC 42001 (AI Management System)

The international standard for an AIMS expects exactly this kind of documented intake-to-monitoring lifecycle. Clause 6 (Planning) corresponds to risk tiering and impact assessment. Clause 8 (Operations) corresponds to review, approval, and monitoring. Clause 9 (Performance Evaluation) corresponds to the KPIs and re-approval triggers covered later in this guide. For organizations pursuing ISO/IEC 42001 certification, the workflow is the operational evidence that the AIMS exists in practice, not just on paper.

Colorado AI Act (SB 24-205)

Effective June 30, 2026 after a delay from the original February date, this is the first comprehensive US state AI law. It applies to deployers of high-risk AI systems making consequential decisions in eight domains and requires a documented risk management policy aligned to NIST AI RMF or ISO/IEC 42001, an annual impact assessment per high-risk system, consumer disclosure when AI influences a consequential decision, and three-year record retention. Adoption of NIST AI RMF or ISO/IEC 42001 establishes a statutory safe harbor and creates a rebuttable presumption of reasonable care. Penalties run up to $20,000 per violation, enforced by the Colorado Attorney General.

Texas TRAIGA

Texas takes a similar safe-harbor approach: substantial compliance with NIST AI RMF or another recognized framework provides protection from enforcement actions. Penalties for uncurable violations range from $80,000 to $200,000. Internal red-team discoveries qualify for additional protection: a quiet but significant incentive to invest in pre-deployment review.

California (sectoral)

California’s Health Care Services AI Act now requires disclosure when generative AI is used in patient communications. Several additional bills target clinical decision support and algorithmic pricing. None of these create a single comprehensive AI law, but each adds a row to the impact-assessment template.

The practical conclusion: build the workflow around NIST AI RMF as the baseline vocabulary, document it inside an ISO/IEC 42001-style AIMS structure, and the state-law obligations largely fall out as artifacts you’re already producing. Trying to retrofit framework alignment after the workflow exists is significantly harder than building to the frameworks from the start.
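One way to make that baseline explicit is to keep a crosswalk from workflow stages to framework references alongside the charter. The mapping below is a simplified sketch based on the correspondences described above; the ISO/IEC 42001 clause assignments in particular are coarser than a real AIMS statement of applicability would be, and the charter/roles row is an assumption rather than something the clauses spell out.

```python
# Simplified crosswalk from workflow stages to NIST AI RMF functions and
# ISO/IEC 42001 clauses. Real mappings are more granular than this.
FRAMEWORK_CROSSWALK = {
    "charter_and_roles":    {"nist_ai_rmf": "Govern",            "iso_42001": "Clauses 4-5 (assumed)"},
    "intake_and_tiering":   {"nist_ai_rmf": "Map",               "iso_42001": "Clause 6 (Planning)"},
    "impact_assessment":    {"nist_ai_rmf": "Measure",           "iso_42001": "Clause 6 (Planning)"},
    "review_and_approval":  {"nist_ai_rmf": "Manage",            "iso_42001": "Clause 8 (Operations)"},
    "monitoring":           {"nist_ai_rmf": "Measure / Manage",  "iso_42001": "Clause 8 (Operations)"},
    "kpis_and_re_approval": {"nist_ai_rmf": "Govern",            "iso_42001": "Clause 9 (Performance Evaluation)"},
}
```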

Adapting the workflow for generative and agentic AI

Generative AI and agentic AI break two assumptions that traditional model-risk workflows rely on. First, they break the assumption that the use case is bounded: a GenAI tool deployed for one task gets used for ten others within a quarter. Second, agentic AI breaks the assumption that humans are in the loop for each decision; an agent that books meetings, files tickets, or executes trades makes thousands of micro-decisions between approval gates.

Three concrete adaptations cover most of the gap.

  • Scope-of-use as a first-class artifact. GenAI approvals should specify what the tool may be used for and what it may not, not just at deployment, but as an enforced limit. Approval to use a GenAI assistant for drafting marketing copy is not approval to use it for HR screening. A scope statement attached to the approval, surfaced in the user interface, and re-asserted at usage time keeps drift in check.
  • Pattern-level approval for agents. Agentic AI is rarely one decision: it’s a class of decisions. The intake form should capture the agent’s permitted action set, the tools and APIs it may invoke, the data domains it may touch, and the explicit boundary conditions where it must escalate to a human. Approval is for the pattern, not for each invocation, but the pattern needs to be defensibly narrow.
  • Triggered re-review for capability changes. When the underlying foundation model changes, a vendor swap from one model to another, a major version upgrade, a fine-tune, the use case re-enters the workflow. This is not a full re-approval; for stable Tier 2 cases it can be a delta review. But it must happen, because the model is what was approved, not the wrapper around it.
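To make the scope-of-use idea concrete: a minimal sketch, assuming the approval record carries a machine-readable scope statement, of re-asserting scope at usage time rather than only at deployment. The field names and example purposes are hypothetical.

```python
# Illustrative scope statement attached to an approval. Field names are assumptions;
# the point is that scope is checked at usage time, not only at deployment.
APPROVED_SCOPE = {
    "use_case_id": "genai-marketing-copy-001",
    "permitted_purposes": {"draft_marketing_copy", "summarize_campaign_briefs"},
    "prohibited_purposes": {"hr_screening", "pricing_decisions"},
    "permitted_data": {"public", "marketing_content"},
}

def check_scope(purpose: str, data_categories: set[str]) -> bool:
    """Return True only if the requested use stays inside the approved scope."""
    if purpose in APPROVED_SCOPE["prohibited_purposes"]:
        return False
    if purpose not in APPROVED_SCOPE["permitted_purposes"]:
        return False  # unknown purposes should escalate rather than pass silently
    return data_categories <= APPROVED_SCOPE["permitted_data"]
```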

A 2026 Aon AI risk briefing referenced the Stanford AI Index data showing 233 documented harmful AI incidents in 2024, a 56 percent year-over-year increase. A material share of those incidents involved GenAI used outside its approved scope or agents acting beyond their reviewed boundaries. The workflow adaptations above are designed for exactly that pattern.

Post-approval: monitoring, re-approval, and retirement

Approval is the start of the lifecycle, not the end of it. Most workflows treat post-approval as someone else’s problem, usually handed off to MLOps, and lose visibility within months. The practical fix is to make monitoring a defined seventh stage of the workflow, with named triggers that pull a use case back in for re-review.

The triggers worth naming explicitly:

  • Performance drift beyond a defined threshold. Set the threshold at approval time, not later. Accuracy, false-positive rate, hallucination rate for GenAI, whatever the relevant metric is.
  • Scope expansion. New user populations, new data sources, new decision types within the same tool.
  • Vendor model change. Foundation model upgrade, vendor switch, fine-tune, or material change to the system prompt.
  • Regulatory change. New state law, new sectoral guidance, change in NIST or ISO references.
  • Material incident. A near-miss, a complaint, a discovered bias issue, a security event.
  • Annual review. For all Tier 2 and above, regardless of triggers. The Colorado AI Act requires this for high-risk systems anyway.
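These triggers are easy to evaluate mechanically if the thresholds are written down at approval time. A minimal sketch, with illustrative parameter names and the drift threshold assumed to come from the approval memo:

```python
from datetime import date

def re_review_triggers(
    tier: int,
    metric_drift: float,          # e.g. drop in accuracy or rise in hallucination rate
    drift_threshold: float,       # fixed at approval time, not after the fact
    scope_expanded: bool,
    model_changed: bool,          # foundation model swap, upgrade, fine-tune, prompt change
    regulation_changed: bool,
    incident_occurred: bool,
    last_review: date,
    today: date,
) -> list[str]:
    """Return the named triggers that pull this use case back into the workflow."""
    triggers = []
    if metric_drift > drift_threshold:
        triggers.append("performance_drift")
    if scope_expanded:
        triggers.append("scope_expansion")
    if model_changed:
        triggers.append("vendor_model_change")
    if regulation_changed:
        triggers.append("regulatory_change")
    if incident_occurred:
        triggers.append("material_incident")
    if tier >= 2 and (today - last_review).days >= 365:
        triggers.append("annual_review")
    return triggers  # any non-empty result means re-entry, usually as a delta review
```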

Retirement matters too. When a use case is decommissioned, the AI inventory should record it as retired, with a short post-mortem capturing why and what the replacement is (if any). Auditors increasingly ask for retirement evidence as a way to test whether the inventory is real-time or theatrical.

KPIs that prove the workflow is working

Without measurement, the workflow becomes a black box that’s either “fine” or “the bottleneck.” A small KPI set keeps it honest:

  • Cycle time by tier. Tier 1: 5 business days from intake to decision. Tier 2: 15. Tier 3: 30. Track median and 90th percentile.
  • Approval rate by tier. Healthy ranges: Tier 1 around 80 to 90 percent, Tier 2 around 60 to 75 percent, Tier 3 around 40 to 60 percent. Outside those ranges, investigate why.
  • Inventory completeness. Estimated AI usage in the org versus inventoried use cases. Periodic discovery scans (browser-based, SSO log analysis) test the gap.
  • Re-review rate. What percentage of approved use cases re-entered the workflow in the last quarter, and for which trigger reasons? A zero re-review rate is a red flag, not a success metric.
  • Time-to-rejection. How fast does the workflow tell submitters “no”? Slow rejections damage trust more than fast rejections.
  • Audit-readiness sample test. Quarterly, pull five random approved use cases and verify the full artifact chain exists. Score the gap.
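A short sketch of the cycle-time KPI, computing median and 90th percentile against the SLA targets above from a list of (tier, business-days-to-decision) records. The data source and exact SLA values are assumptions to be replaced with your own.

```python
import statistics
from collections import defaultdict

SLA_DAYS = {1: 5, 2: 15, 3: 30}  # targets from the KPI list above

def cycle_time_report(decisions: list[tuple[int, int]]) -> dict:
    """decisions: (tier, business days from intake to decision) per closed use case."""
    by_tier: dict[int, list[int]] = defaultdict(list)
    for tier, days in decisions:
        by_tier[tier].append(days)
    report = {}
    for tier, durations in sorted(by_tier.items()):
        report[tier] = {
            "median": statistics.median(durations),
            "p90": statistics.quantiles(durations, n=10)[-1],  # 90th percentile
            "sla": SLA_DAYS.get(tier),
            "sla_breaches": sum(d > SLA_DAYS.get(tier, float("inf")) for d in durations),
        }
    return report
```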

Report these to the AI Review Board monthly and to the executive sponsor quarterly. The numbers force the conversation about whether the workflow is enabling the business or strangling it, and that’s the conversation governance leaders should want to be having.

Common pitfalls and how to avoid them

A few patterns recur across organizations that built a workflow and watched it underperform.

  • Governance fatigue. When the same five people are asked to review every use case at every tier, throughput collapses within a quarter. Fix: tier-based delegation, a published approval matrix, and rotating non-voting observer seats on the AI Review Board (ARB) so capacity scales.
  • The “AI police” perception. When governance only ever says no or asks for more documentation, business teams route around it. Fix: publish an internal “approved AI” catalog of pre-cleared tools and patterns. Make it easier to do the right thing than the wrong thing.
  • Paper compliance. Producing artifacts nobody reads: assessments scored by people who didn’t build the system, sign-offs from executives who never opened them. Fix: a mandatory walkthrough of the impact assessment with the business owner, not just sign-off in a workflow tool.
  • The shadow inventory. The official inventory shows 40 use cases; a discovery scan finds 200. The official inventory is the lie. Fix: combine top-down intake with bottom-up discovery, and treat unknown AI as a security incident, not a curiosity.
  • One-size committee. Sending a Tier 1 internal-summarization tool through the full ARB review erodes trust in the workflow. Fix: enforce the fast-track lane and audit it, rather than abolishing it.

The throughline across all of these: the workflow exists to make good AI faster and bad AI slower. Anything else means the design needs another pass.

FAQ

1. Who owns the AI governance workflow?

In most US enterprises, the workflow itself is owned by the Chief AI Officer or equivalent (head of responsible AI, head of AI governance). The owner is responsible for design, throughput, and consistency, not for individual approval decisions. Decision authority sits with the AI Review Board for medium- and high-tier cases and with delegated leads for low-tier cases.

2. How long should AI use case approval take?

Reasonable SLAs are 5 business days for low-tier (Tier 1), 10 to 15 days for medium-tier (Tier 2), and 20 to 30 days for high-tier (Tier 3) cases. These ranges align with benchmarks from enterprise intake processes published by Pertama Partners and others. If your average exceeds these significantly, the bottleneck is usually missing decision rights, not insufficient rigor.

3. What’s the difference between AI governance and model risk management?

Model risk management (MRM), shaped by SR 11-7 in financial services, focuses on quantitative model performance, validation, and ongoing monitoring of statistical models. AI governance is broader: it covers the same lifecycle for ML models, plus generative AI, agentic AI, vendor-embedded AI, and the policy, ethics, and compliance dimensions MRM doesn’t touch. Mature programs run the two as complementary, with MRM as a specialized track inside the wider AI governance workflow.

4. Do small companies need a formal AI governance workflow?

Yes, but proportionately. A 50-person company doesn’t need a 12-member review board, but it does need a single front door for AI requests, a basic risk tier assessment, and a named approver. The Colorado AI Act and TRAIGA do not exempt small companies that act as deployers of high-risk AI; the documentation expectations scale to size, but the obligation does not disappear.

5. How do you handle vendor or third-party AI in the workflow?

Treat vendor AI as a use case, not a procurement event. The intake form should flag third-party AI, route a parallel vendor review (data handling, security posture, model card availability, contractual terms on training data), and require the vendor to provide documentation that supports your impact assessment. Embedded AI features inside larger SaaS platforms are the hardest case: they often enter without intake. Periodic discovery scans and SaaS inventory reviews close that gap.

6. What triggers re-approval of an already-deployed AI use case?

Six standard triggers: performance drift beyond a defined threshold, scope expansion, foundation model change or vendor swap, regulatory change, material incident, and annual review for Tier 2 and above. Re-approval is rarely a full new approval: for stable use cases it’s a delta review focused on what changed. The Colorado AI Act explicitly requires impact assessment refresh within 90 days of a substantial intentional modification.

Final takeaway

Most AI failures in 2026 will not look like AI failures. They will look like governance failures with AI underneath: a model deployed for a purpose nobody approved, a vendor swap nobody knew happened, a customer-facing decision nobody could explain after the fact. The workflow described here is the smallest defensible version of the system that prevents those outcomes: one front door, four tiers, named approvers, named artifacts, named SLAs, and explicit triggers that pull use cases back when something changes.

For teams ready to move from process design to formal capability, the GAICC ISO/IEC 42001 Lead Implementer training builds exactly this workflow inside an internationally recognized AI Management System framework, including the artifacts, audits, and continuous improvement loops US regulators are starting to expect by default.


About the Author

Dr Faiz Rasool

Director at the Global AI Certification Council (GAICC) and PM Training School

Globally certified instructor in ISO/IEC, PMI®, TOGAF®, and Scrum.org disciplines with hands-on experience in ISO/IEC 42001 AI governance across the US, EU, and Asia-Pacific.
