Lab 7

Governance Framework

50 min · 8 sections · 2 prerequisites

What You'll Learn

  • Design a comprehensive AI governance framework
  • Create practical policies and procedures
  • Build assessment and monitoring tools

Prerequisites

  • 4.2-organizational-change
  • 4.3-risk-and-governance

Lab Overview

AI without governance is a liability waiting to happen. In this lab, you'll design a governance framework that enables AI innovation while protecting your organization from risks.

What you'll create:

  • A complete AI governance structure
  • Practical policies ready for implementation
  • Assessment tools and checklists


Part 1: Assessment Context (10 minutes)

Define Your Organization

Choose one:

  • Your actual organization
  • A hypothetical company in your industry
  • A provided scenario

Complete the Context Template

ORGANIZATIONAL CONTEXT
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Organization: [Name/Type]
Industry: [Industry]
Size: [Employees, revenue]
AI Maturity: [None/Emerging/Established/Advanced]

Current AI Usage:
□ Consumer AI tools (ChatGPT, etc.)
□ Departmental AI solutions
□ Enterprise AI platforms
□ Custom AI development
□ AI embedded in purchased software

Key Stakeholders:
• Executive sponsor: [Role]
• Primary user groups: [List]
• IT/Security: [Role]
• Legal/Compliance: [Role]
• HR: [Role]

Regulatory Environment:
• Industry regulations: [List]
• Geographic regulations: [List]
• Internal policies: [List]

Risk Tolerance:
[Low / Medium / High] - Because [reason]
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━


Part 2: Governance Structure (10 minutes)

Design the Organizational Structure

AI GOVERNANCE STRUCTURE
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

EXECUTIVE OVERSIGHT
Role: AI Executive Sponsor
Responsibilities:
• [Responsibility 1]
• [Responsibility 2]
• [Responsibility 3]
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

AI GOVERNANCE COMMITTEE
Members: [List roles]
Meeting Frequency: [Monthly/Quarterly]
Responsibilities:
• [Responsibility 1]
• [Responsibility 2]
• [Responsibility 3]
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

AI CENTER OF EXCELLENCE (if applicable)
Lead: [Role]
Functions:
• [Function 1]
• [Function 2]
• [Function 3]
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

OPERATIONAL ROLES
AI Product Owners: [Where they sit]
AI Analysts: [Where they sit]
End Users: [How they're supported]
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Define Decision Rights

Decision Type            | Who Decides | Who Advises | Who's Informed
New AI use case approval |             |             |
Data usage decisions     |             |             |
Vendor selection         |             |             |
Budget allocation        |             |             |
Policy exceptions        |             |             |
Incident response        |             |             |


Part 3: Policies (15 minutes)

Policy 1: Acceptable Use

AI ACCEPTABLE USE POLICY
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

PURPOSE
This policy defines acceptable and prohibited uses of AI tools
within [Organization].

SCOPE
Applies to all employees, contractors, and partners using
AI tools for business purposes.

ACCEPTABLE USES
✓ [Approved use 1]
✓ [Approved use 2]
✓ [Approved use 3]
✓ [Approved use 4]
✓ [Approved use 5]

PROHIBITED USES
✗ [Prohibited use 1]
✗ [Prohibited use 2]
✗ [Prohibited use 3]
✗ [Prohibited use 4]
✗ [Prohibited use 5]

DATA HANDLING REQUIREMENTS
• Never input: [List sensitive data types]
• Always anonymize: [Data requiring anonymization]
• Approved data: [Data types OK to use]

HUMAN OVERSIGHT REQUIREMENTS
• [When human review is required]
• [When human approval is required]
• [When human must remain in loop]

QUALITY REQUIREMENTS
• [Quality standard 1]
• [Quality standard 2]
• [Quality standard 3]

VIOLATIONS
Violations of this policy may result in [consequences].
Report violations to [contact].

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Policy 2: AI Procurement/Development

AI PROCUREMENT & DEVELOPMENT POLICY
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

APPROVAL REQUIREMENTS

Tier 1: Departmental Authority
• Consumer AI tools (free tiers)
• Experimental/pilot use only
• No sensitive data
• Requires: Manager approval

Tier 2: IT/Governance Review
• Paid AI subscriptions
• Production use cases
• May handle business data
• Requires: IT Security review, Data Privacy review

Tier 3: Executive Approval
• Enterprise AI platforms
• Custom AI development
• Handles sensitive/regulated data
• Requires: Full governance committee review

EVALUATION CRITERIA
All AI solutions must be evaluated against:
□ Business justification
□ Data privacy compliance
□ Security requirements
□ Bias and fairness considerations
□ Integration requirements
□ Total cost of ownership
□ Vendor stability

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
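If you later want to embed the tiering rules in an intake form or ticketing workflow, writing them down as explicit code makes the criteria unambiguous. The Python sketch below is illustrative only; the request fields (is_paid, is_production, uses_business_data, uses_sensitive_data, is_custom_development, is_enterprise_platform) are hypothetical placeholders for whatever your intake process actually captures.

from dataclasses import dataclass

@dataclass
class AIUseCaseRequest:
    """Intake attributes for a proposed AI use case (illustrative fields only)."""
    is_paid: bool                  # paid subscription or license involved
    is_production: bool            # intended for production use, not a pilot
    uses_business_data: bool       # touches internal business data
    uses_sensitive_data: bool      # sensitive or regulated data involved
    is_custom_development: bool    # custom model or application development
    is_enterprise_platform: bool   # enterprise-wide AI platform

def approval_tier(req: AIUseCaseRequest) -> int:
    """Map a request to the approval tier defined in the policy above."""
    # Tier 3: enterprise platforms, custom development, or sensitive/regulated data
    if req.is_enterprise_platform or req.is_custom_development or req.uses_sensitive_data:
        return 3
    # Tier 2: paid subscriptions, production use, or business data
    if req.is_paid or req.is_production or req.uses_business_data:
        return 2
    # Tier 1: free, experimental, no sensitive data -- manager approval only
    return 1

# Example: a paid departmental tool handling business data -> Tier 2 review
request = AIUseCaseRequest(
    is_paid=True, is_production=True, uses_business_data=True,
    uses_sensitive_data=False, is_custom_development=False,
    is_enterprise_platform=False,
)
print(approval_tier(request))  # 2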

Policy 3: Incident Response

AI INCIDENT RESPONSE POLICY
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

INCIDENT DEFINITION
An AI incident is any event where AI systems:
• Produce harmful or inappropriate output
• Expose sensitive data
• Make decisions that cause harm
• Fail in a way that impacts business operations
• Violate regulatory requirements

SEVERITY LEVELS
Critical: [Definition, examples]
High: [Definition, examples]
Medium: [Definition, examples]
Low: [Definition, examples]

RESPONSE PROCEDURES

Immediate (< 1 hour):
1. [Step 1: Contain]
2. [Step 2: Assess]
3. [Step 3: Notify]

Short-term (< 24 hours):
4. [Step 4: Investigate]
5. [Step 5: Communicate]
6. [Step 6: Remediate]

Long-term (< 1 week):
7. [Step 7: Root cause]
8. [Step 8: Prevent]
9. [Step 9: Document]

ESCALATION CONTACTS
• Primary: [Role, contact]
• Backup: [Role, contact]
• Executive: [Role, contact]
• Legal: [Role, contact]

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━


Part 4: Assessment Tools (10 minutes)

AI Use Case Assessment Checklist

AI USE CASE ASSESSMENT
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Use Case: ____________________
Requested by: ____________________
Date: ____________________

BUSINESS JUSTIFICATION
□ Problem clearly defined
□ AI is the appropriate solution (simpler non-AI options ruled out)
□ Success metrics defined
□ ROI documented

DATA ASSESSMENT
□ Data sources identified
□ Data classification completed
□ No prohibited data types
□ Privacy impact assessment (if needed)
□ Data retention plan defined

TECHNICAL ASSESSMENT
□ Integration requirements documented
□ Security review completed
□ Performance requirements defined
□ Disaster recovery plan in place

OPERATIONAL ASSESSMENT
□ Human oversight model defined
□ Quality controls documented
□ Feedback mechanism in place
□ Training plan for users

RISK ASSESSMENT
□ Risks identified and rated
□ Mitigation plans documented
□ Residual risk acceptable

COMPLIANCE ASSESSMENT
□ Regulatory requirements reviewed
□ Policy compliance verified
□ Legal review (if needed)

APPROVAL
□ Tier 1 □ Tier 2 □ Tier 3

Approved by: _____________ Date: _______
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

AI Risk Assessment Matrix

RISK ASSESSMENT MATRIX
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Rate each factor below 1-5 (1 = Low, 5 = High)

LIKELIHOOD FACTORS
___ Volume of AI interactions
___ Complexity of AI tasks
___ Data sensitivity level
___ User sophistication variance
___ Integration complexity
LIKELIHOOD SCORE: ___ / 25

IMPACT FACTORS
___ Financial exposure
___ Reputational risk
___ Regulatory consequences
___ Operational disruption
___ Safety implications
IMPACT SCORE: ___ / 25

RISK LEVEL
Low: Combined < 20
Medium: Combined 20-35
High: Combined > 35

COMBINED SCORE: ___ / 50
RISK LEVEL: ___________

REQUIRED CONTROLS
Low: Standard monitoring
Medium: Enhanced oversight, periodic review
High: Full governance review, continuous monitoring
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
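Because the matrix is plain arithmetic, the scoring is easy to automate so every assessment is calculated the same way. Here is a minimal Python sketch, assuming the five likelihood ratings and five impact ratings arrive as lists of integers; the thresholds mirror the matrix above (combined score out of 50).

def risk_level(likelihood_factors, impact_factors):
    """Combine 1-5 factor ratings into a risk level per the matrix above."""
    if len(likelihood_factors) != 5 or len(impact_factors) != 5:
        raise ValueError("Expected five likelihood and five impact ratings")
    likelihood = sum(likelihood_factors)   # out of 25
    impact = sum(impact_factors)           # out of 25
    combined = likelihood + impact         # out of 50
    if combined < 20:
        level = "Low"       # standard monitoring
    elif combined <= 35:
        level = "Medium"    # enhanced oversight, periodic review
    else:
        level = "High"      # full governance review, continuous monitoring
    return combined, level

# Example: a moderately risky use case
print(risk_level([3, 2, 4, 3, 2], [2, 3, 4, 2, 3]))  # (28, 'Medium')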


Part 5: Monitoring Dashboard (5 minutes)

Define Key Metrics

AI GOVERNANCE DASHBOARD
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

USAGE METRICS (Monthly)
• Active AI use cases: ___
• Users accessing AI: ___
• Queries/interactions: ___
• Token/API spend: $___

QUALITY METRICS
• Average quality score: ___
• Human override rate: ___%
• Escalation rate: ___%
• User satisfaction: ___

COMPLIANCE METRICS
• Policy violations: ___
• Incidents (by severity): ___
• Pending assessments: ___
• Training completion: ___%

RISK INDICATORS
• High-risk use cases: ___
• Open issues: ___
• Overdue reviews: ___

TRENDS
[Include month-over-month changes]

ACTION ITEMS
1. [Priority item]
2. [Priority item]
3. [Priority item]
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
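Most of these numbers can be computed directly from an export of AI interaction logs. The Python sketch below shows the idea; the record fields (user, overridden, escalated, cost_usd) are hypothetical and should be mapped to whatever your AI platforms actually log.

def dashboard_metrics(interactions):
    """Summarize monthly usage and quality metrics from interaction records."""
    total = len(interactions)
    if total == 0:
        return {}
    return {
        "queries": total,
        "active_users": len({r["user"] for r in interactions}),
        "override_rate_pct": round(100 * sum(r["overridden"] for r in interactions) / total, 1),
        "escalation_rate_pct": round(100 * sum(r["escalated"] for r in interactions) / total, 1),
        "api_spend_usd": round(sum(r["cost_usd"] for r in interactions), 2),
    }

# Example with three records from two users
sample = [
    {"user": "ana", "overridden": False, "escalated": False, "cost_usd": 0.04},
    {"user": "ben", "overridden": True,  "escalated": False, "cost_usd": 0.07},
    {"user": "ana", "overridden": False, "escalated": True,  "cost_usd": 0.05},
]
print(dashboard_metrics(sample))
# {'queries': 3, 'active_users': 2, 'override_rate_pct': 33.3,
#  'escalation_rate_pct': 33.3, 'api_spend_usd': 0.16}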


Deliverable

Create a complete AI Governance Package:

  1. Governance Structure Document

    • Organizational chart
    • Roles and responsibilities
    • Decision rights matrix
  2. Policy Documents

    • Acceptable Use Policy
    • Procurement/Development Policy
    • Incident Response Policy
  3. Assessment Tools

    • Use Case Assessment Checklist
    • Risk Assessment Matrix
  4. Monitoring Framework

    • Dashboard template
    • Key metrics definitions
    • Reporting schedule
  5. Implementation Roadmap

    • Priority order for rollout
    • Quick wins vs. long-term items
    • Success criteria


Extension Challenge

Socialize the Framework

  1. Create a 5-slide presentation summarizing the framework
  2. Identify the top 3 objections stakeholders might have
  3. Prepare responses to each objection
  4. Practice presenting with AI playing devil's advocate

Document how you would roll this out in your organization.
