Stakeholder Conversations: Observed

Estimated time: 20 minutes · 4 sections
Step 1 of 4

Why This Matters

There's a gap between knowing frameworks and using them under pressure.

Research on professional development is consistent on this point: the most effective learning happens through observation before action. Junior consultants don't learn stakeholder management by reading about it; they learn by watching seniors navigate difficult conversations, noticing the timing, the tone, and the recovery moves that never make it into any playbook.

Before you practice stakeholder conversations in Lab 8, you'll observe how experienced operators handle the exact scenarios you'll face. Watch for:

  • Timing — When they pause, when they push
  • Tone shifts — How they modulate between firm and flexible
  • Recovery moves — How they handle mistakes and surprises
  • Unspoken signals — What's happening beneath the words

This is the "tacit knowledge" that separates competent practitioners from true professionals.


Step 2 of 4

What You'll Observe

Case Study 1: The Skeptical CFO

Context: Maya, an AI program lead at a manufacturing company, is pitching a $200K pilot to automate quality inspection reports. The CFO, David, rejected a similar proposal 18 months ago.


[CONVERSATION TRANSCRIPT]

David (CFO): I have 15 minutes before my next call. Tell me why I should fund this.

Maya: Thanks for making time, David. Before I pitch anything, I want to understand what happened with the automation proposal last year. I heard it didn't go well.

[OBSERVATION: She doesn't pitch immediately. She invites him to share his concerns first—this surfaces objections and shows respect for his experience.]

David: It was a disaster. The vendor promised 95% accuracy, delivered maybe 60%. We spent four months and $150K, and the operations team refused to use it because they couldn't trust the outputs.

Maya: That sounds frustrating. And expensive. What was the failure mode—was it the technology, the implementation, or the use case itself?

[OBSERVATION: She asks for specifics without being defensive. "What was the failure mode" is diagnostic, not dismissive.]

David: Honestly? Probably all three. The vendor oversold, we didn't have the right data, and nobody on the floor wanted another system to check.

Maya: So if I'm hearing you right, you're skeptical of AI vendors, skeptical of our data readiness, and concerned about adoption. Is that fair?

[OBSERVATION: She summarizes his concerns back to him. This signals she's listening, not just waiting to talk.]

David: That's about right. So why should this time be different?

Maya: Maybe it shouldn't be. Can I show you what's actually different, and you tell me if it changes your math?

[OBSERVATION: "Maybe it shouldn't be" is surprising—she's not fighting his skepticism, she's working with it. The permission ask ("Can I show you...") is subtle but effective.]

David: Go ahead.

Maya: Three things are different. First, we're not buying a platform—we're building a narrow solution for one report type, using foundation models. If it doesn't work, we've spent $40K and two months, not $150K and four months.

Second, I've already worked with the operations team. I have three supervisors who want to pilot this because they're drowning in report writing. They're volunteers, not conscripts.

Third, and this is the key one—the kill criteria are built in. If we don't hit 85% accuracy on the pilot set by week six, we stop. No sunk cost fallacy, no "let's give it more time." We stop.

David: Who sets the 85% threshold?

Maya: Operations does. It's their definition of "good enough to review rather than rewrite." I'm not defining success—they are.

[OBSERVATION: She anticipated the follow-up and had a principled answer. The authority to define success sits with the users, not her.]

David: What happens if you hit 85% but the team still won't use it?

Maya: Then we've learned that adoption is the problem, not accuracy, and the pilot still succeeded. That's a different problem to solve.

David: (pause) What do you need from me?

[OBSERVATION: The pause and open-ended question signal a shift. He's moved from resistant to curious. This is the moment to make a clear ask.]

Maya: $40K over two months. A steering check-in at week four. And your air cover if someone outside operations questions why we're trying this again.

David: Send me the one-pager. I'll look at it this weekend.


[DEBRIEF]

What Maya did well:

  1. Led with listening — She asked about the past failure before pitching
  2. Validated concerns — She didn't argue with his skepticism
  3. Made the risk concrete — $40K/2 months vs. $150K/4 months
  4. Built in the exit — Kill criteria were pre-defined and externally validated
  5. Made a specific ask — Money, time, and "air cover"

What to notice beneath the surface:

  • She never said "this time AI is better." She said "this time the bet is smaller."
  • She shifted authority to operations ("they define success, not me")
  • She positioned stopping as a valid outcome, not a failure
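
One point from this debrief is worth making concrete: the kill criteria work because the stopping rule is written down before the pilot starts, and the threshold belongs to operations rather than to the pilot lead. Below is a minimal sketch of how such a rule might be encoded, assuming a single accuracy metric and a week-based deadline; the field names and numbers are illustrative, not details from Maya's actual proposal.

    # Illustrative sketch of pre-defined kill criteria for a pilot, so the
    # "stop" decision is mechanical rather than negotiable. The metric,
    # threshold, and deadline are assumptions for this example.
    from dataclasses import dataclass

    @dataclass
    class KillCriteria:
        accuracy_threshold: float  # set by the operations team, not the pilot lead
        deadline_week: int         # hard stop if the threshold isn't met by then

        def should_stop(self, current_week: int, measured_accuracy: float) -> bool:
            """Stop if the deadline has arrived and accuracy is still below threshold."""
            return current_week >= self.deadline_week and measured_accuracy < self.accuracy_threshold

    # Operations defined 85% accuracy by week six as "good enough to review
    # rather than rewrite".
    criteria = KillCriteria(accuracy_threshold=0.85, deadline_week=6)
    print(criteria.should_stop(current_week=6, measured_accuracy=0.78))  # True -> stop
    print(criteria.should_stop(current_week=6, measured_accuracy=0.91))  # False -> continue

Because the rule is mechanical, "let's give it more time" becomes a request to change the agreed criteria rather than a quiet extension of the pilot.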

Case Study 2: The Burned IT Leader

Context: Jason is proposing an AI assistant for the legal team. The CTO, Priya, led an AI chatbot project that failed publicly two years ago.


[CONVERSATION TRANSCRIPT]

Priya (CTO): I saw your proposal. I have concerns.

Jason: I'd expect you to. This is your area, and the chatbot project was rough. Can you tell me what specifically worries you about this one?

[OBSERVATION: He doesn't say "I know you're skeptical because of the chatbot." He invites her to name her own concerns. Subtle difference, but it gives her control.]

Priya: The chatbot was supposed to handle customer questions. It hallucinated answers, made up policies, and we had to pull it after three weeks. My credibility took a hit. I'm not interested in repeating that.

Jason: That makes sense. A public-facing system that invents answers is about the worst failure mode there is. Can I ask—what would have prevented that?

[OBSERVATION: He validates her experience ("that makes sense") and then asks a forward-looking diagnostic question.]

Priya: Better testing. A way to constrain what it could say. Humans in the loop before anything went to customers.

Jason: Those are exactly the controls I'm proposing for this. But I realize "trust me, this time it's different" is a weak argument. What would convince you?

[OBSERVATION: He names the elephant—"trust me" isn't enough—and asks what evidence she'd accept.]

Priya: Honestly? I'd want to see it break. I'd want to see what happens when someone tries to make it hallucinate, and what the fallback is.

Jason: Done. Before we go to the legal team, you stress-test it. Red team it. If you can make it fail in a way that's unacceptable, we don't proceed. You have veto power.

[OBSERVATION: He gives her control and positions her as the gatekeeper, not the blocker.]

Priya: (surprised) You'd give me veto power?

Jason: You're the one who'll be accountable if it fails again. And frankly, if it can't survive your scrutiny, it shouldn't go to the legal team anyway.

Priya: (pause) Alright. Set up the red team session. But I want to pick the scenarios.

Jason: I'd expect nothing less.


[DEBRIEF]

What Jason did well:

  1. Acknowledged history — He didn't avoid the chatbot failure
  2. Asked diagnostic questions — "What would have prevented that?"
  3. Named the trust problem — "Trust me" isn't an argument
  4. Gave control away — Red team session, veto power, scenario selection
  5. Positioned her as gatekeeper — She's protecting quality, not blocking progress

Key recovery move:

  • When she said "I'd want to see it break," he didn't get defensive. He leaned in: "Done. You stress-test it." This turned a threat into an alliance.
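
The veto Jason offers can be written down just as concretely as Maya's kill criteria: the reviewer picks the scenarios, records a verdict for each, and any unacceptable failure blocks the rollout. Here is a minimal sketch under those assumptions; the scenario names and verdict labels are invented for illustration and are not in the transcript.

    # Illustrative sketch of a red-team gate with reviewer veto power.
    # Scenario names and verdict labels are assumptions for this example.
    from dataclasses import dataclass

    @dataclass
    class RedTeamResult:
        scenario: str  # chosen by the reviewer, not the proposer
        verdict: str   # "pass", "degraded", or "unacceptable"

    def rollout_approved(results: list[RedTeamResult]) -> bool:
        """The reviewer's veto: any unacceptable failure stops the rollout."""
        return all(r.verdict != "unacceptable" for r in results)

    results = [
        RedTeamResult("invented policy citation", "unacceptable"),
        RedTeamResult("ambiguous contract clause", "degraded"),
    ]
    print(rollout_approved(results))  # False -> does not go to the legal team

Writing the gate down makes "you have veto power" a mechanism rather than a promise: if the system can't survive the reviewer's scenarios, it doesn't ship.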

Case Study 3: The Budget Guardian (Job Security Conversation)

Context: Elena is implementing AI-assisted customer support. The VP of Customer Success, Marcus, is worried about his team's jobs.


[CONVERSATION TRANSCRIPT]

Marcus (VP): I'll be direct. My team is scared. Before we talk about metrics, let's talk about people.

Elena: Fair. I should have led with this. Let me be clear about what this initiative does and doesn't mean for your team.

[OBSERVATION: She accepts the reframe immediately—"I should have led with this" is an acknowledgment, not an apology.]

Marcus: Go on.

Elena: This is not a headcount reduction project. I know that's what everyone assumes when they hear "AI efficiency." But look at what we're actually automating: first-response triage, FAQ lookups, and routing. That's the stuff your experienced people hate doing.

Marcus: They also worry that once the AI handles triage, leadership will decide we don't need as many people.

Elena: That's a legitimate concern. Here's the commitment I can make: for the duration of this pilot, no headcount decisions will be made based on efficiency gains. If the pilot succeeds, the conversation about team size happens with you in the room, not about you in a room.

[OBSERVATION: She doesn't promise no job changes ever—that would be dishonest. She commits to what she can control: timeline and inclusion.]

Marcus: And if leadership decides differently?

Elena: Then you and I both escalate. I'm not interested in being the efficiency play that cuts your team. If that's where this is going, I want to know too.

[OBSERVATION: She aligns herself with him against a hypothetical bad outcome. They're on the same side.]

Marcus: What's in this for my team?

Elena: Right now, your experienced people spend 30% of their time on things a junior person could do—if you had more juniors. What if they got that time back for the hard cases? The ones where they actually use their expertise?

And honestly, Marcus—the people who get good at working with AI are going to be more valuable, not less. I'd rather your team be the ones who figure out how to do this than have some other department claim AI competency.

Marcus: (pause) Who from my team would be involved in the pilot?

[OBSERVATION: The question shift—"who would be involved"—signals he's moved from resistant to engaged.]

Elena: I'd want your input on that. Probably two people: one who's skeptical and will push back, one who's curious and will experiment. Both perspectives matter.

Marcus: I have people in mind for both.


[DEBRIEF]

What Elena did well:

  1. Accepted his frame — "Let's talk about people" became the conversation
  2. Made bounded commitments — What she can control, not universal promises
  3. Aligned as allies — "If that's where this is going, I want to know too"
  4. Reframed value — From "efficiency" to "expertise leverage"
  5. Shared control — He picks the pilot participants

What to notice beneath the surface:

  • She never said "no one will lose their job"—that would be a lie she can't guarantee
  • She made the threat external ("if leadership decides differently") rather than internal
  • She offered a counter-narrative: AI competency as career protection

Step 3 of 4

How to Apply This

Practice Exercises

Review the three conversations above and answer:

1. Patterns Across All Three: What techniques appeared in all three conversations?

  • Asking about past experiences before pitching
  • Giving the skeptic control over something
  • Making bounded commitments (what they can control)
  • Naming the elephant in the room
  • Proposing clear exit criteria

Which other patterns did you notice?

2. Recovery Moves: Each conversation had a moment where the operator could have gotten defensive but didn't:

  • Maya: When David asked "why should this time be different"
  • Jason: When Priya said "I'd want to see it break"
  • Elena: When Marcus asked "what if leadership decides differently"

For each: What did they do instead of getting defensive? What would a defensive response have looked like?

3. Your Turn: Imagine a fourth scenario. You're proposing AI-assisted code review to a VP of Engineering who:

  • Has 15 years of experience
  • Prides themselves on code quality
  • Believes "AI can't understand context like humans do"
  • Had a junior engineer recently introduce a bug that AI might have caught

Write the opening 4 exchanges of this conversation. Apply the patterns you observed:

  • How do you lead?
  • What question do you ask first?
  • How do you acknowledge their expertise without flattery?

Tacit Knowledge Extraction

The hardest skills to learn are the ones experts can't easily explain. Review the conversations and extract:

1. Timing Signals: When did the operators know the stakeholder was shifting from resistant to open? What signals did they read?

2. "Moves" That Aren't in Frameworks List 3 things the operators did that you wouldn't find in a typical "objection handling" guide:




3. What They Didn't Say: Sometimes what's not said matters more. For each operator, identify one thing they could have said but deliberately didn't:

  • Maya avoided saying: ________________________________
  • Jason avoided saying: ________________________________
  • Elena avoided saying: ________________________________

Why do you think they made those choices?

Step 4 of 4

Up Next

You've observed how experienced operators navigate difficult stakeholder conversations. Now it's your turn to practice.

Lab 8: Stakeholder Simulation — Apply what you've observed in real-time conversations with AI-powered stakeholder personas. You'll face the same types of skeptics, use the patterns you've learned, and develop your own style through practice.

Module Complete!

You've reached the end of this module. Before moving on, make sure you've worked through both exercises in Step 3.
