
The Power Of “I Don’t Know”


Two weeks into my role as CTO at a Series B startup, our largest partner threatened to leave. Their API calls were failing, and my team was scrambling. In the emergency meeting, my mind raced to say something authoritative, something that would inspire confidence.

Instead, I said: “I don’t know what’s causing this – yet. Let’s find out together – I know we can.”

That admission changed everything. The room’s energy shifted from panic to focused problem-solving. A junior engineer mentioned seeing similar patterns in our logging system last week. A senior architect pulled up performance metrics showing gradual degradation over the past month. Within hours, we identified a subtle race condition in our caching layer – something we would have missed if we’d rushed to implement the first “obvious” solution.

This moment taught me more about leadership than a decade of management books.

The Science of Uncertainty and Decision-Making

A groundbreaking study from the University of Utah, published in Physical Review E in 2024, validates what many thoughtful leaders have long suspected: quick decisions often come from bias, not insight. The research, led by mathematician Samantha Linn, revealed that early decisions correlate strongly with pre-existing biases, while slower, more deliberate choices show higher accuracy rates.

This isn’t just academic theory. In tech leadership, I’ve watched this play out countless times:

Quick Decision Disasters:

  1. The “Just like Google” Syndrome
    • VP pushing microservices because “that’s how Google scales”
    • Ignored differences in team size, skill sets, and actual needs
    • Result: $2M spent, 6 months wasted, system more complex than ever
  2. The Feature Factory Fallacy
    • Product team “certain” about user needs
    • Built 47 features in 6 months based on assumptions
    • Reality: 87% of features never used more than once
  3. The Architecture Ego Trap
    • Senior architect refused to consider scaling concerns
    • “The design is solid, trust me”
    • Resulted in a 4-hour outage during peak traffic

The Leadership Paradox

Let’s examine what happens when we lean into uncertainty, backed by both research and real-world experience.

The Utah Study’s Key Findings

The research reveals something fascinating about decision-making patterns. Those who made quick decisions showed strong alignment with their initial biases, regardless of available evidence. In contrast, slower decision-makers demonstrated:

  • Higher accuracy rates
  • Better integration of new information
  • More balanced evaluation of alternatives
  • Lower influence from initial biases

This maps perfectly to what I’ve seen leading engineering teams.

Real-World Examples

The Database Migration Success Story

At a previous company, we faced critical scaling issues. The “obvious” solution was to switch to the trending NoSQL database everyone was talking about. Instead of rushing in, we said: “We don’t know enough yet.” Here’s what happened:

  1. Initial Response
    • Mapped current pain points
    • Ran comprehensive load tests
    • Interviewed teams with similar scale
    • Built small proof-of-concepts
  2. What We Found
    • Main issues were in our query patterns
    • Existing database was using only 30% of its capabilities
    • Team needed query optimization skills more than new tech
  3. Final Outcome
    • Optimized existing system
    • 10x performance improvement
    • No migration needed
    • Saved 8 months of potential disruption
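The kind of query-pattern fix that carried us is easy to sketch. The example below is hypothetical – an in-memory SQLite table stands in for the real system – but it shows how the same query goes from a full table scan to an index lookup without any migration:

```python
import sqlite3

# Illustrative sketch, not the company's actual schema.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE events (id INTEGER PRIMARY KEY, user_id INTEGER, payload TEXT)"
)
conn.executemany(
    "INSERT INTO events (user_id, payload) VALUES (?, ?)",
    [(i % 100, "x") for i in range(10_000)],
)

def plan(query):
    # EXPLAIN QUERY PLAN rows end with a human-readable "detail" column.
    rows = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()
    return " ".join(r[-1] for r in rows)

query = "SELECT * FROM events WHERE user_id = 42"
before = plan(query)   # typically "SCAN events" – a full table scan

conn.execute("CREATE INDEX idx_events_user ON events(user_id)")
after = plan(query)    # now a SEARCH using idx_events_user

print(before)
print(after)
```

The database already had the capability; the query patterns just weren’t using it – the same gap we found at ten-thousand times this scale.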

The Product Market Lesson

An even more dramatic example came from our US market expansion:

The Confident Approach (First Attempt):

  • Assumed market needs
  • Copied European playbook
  • Result: $2.7M lost, product rejected

The Uncertain Approach (Second Attempt):

  1. Started with Questions
    • “We don’t know this market”
    • “We don’t understand local needs”
    • “We need to learn before building”
  2. Actions Taken
    • Hired local experts
    • Ran small pilots
    • Gathered usage data
    • Iterated based on feedback
  3. Results
    • Revenue positive in 4 months
    • 92% customer satisfaction
    • 3x faster market penetration

Building a System Around Uncertainty

I needed a way to systematize this approach. Random uncertainty isn’t helpful – we need structured uncertainty. This led me to develop the following system (or dare I say, “framework”): CLEAR.

Why Another Framework?

Most decision-making frameworks assume you have all the information you need. But in modern engineering leadership, that’s rarely true. CLEAR is built specifically for situations where uncertainty is high and the cost of being wrong is significant.

The Five Elements of CLEAR

Context (What We Know)

  • Verified facts and data
  • Current metrics and KPIs
  • Documented constraints
  • Historical precedents

Example: When facing a microservices decision, Context means listing your actual scaling problems, current performance metrics, team size, and existing architecture limitations.

Limitations (What We Don’t Know)

  • Identified knowledge gaps
  • Key assumptions
  • Potential risks
  • Resource constraints

Example: For our US market expansion, Limitations included understanding of local user behavior, regulatory requirements, and technical infrastructure differences.

Exploration (How We’ll Learn)

  • Research methods
  • Experiments to run
  • Data to collect
  • People to consult

Example: During our database migration decision, Exploration involved load testing, proof-of-concepts, and consulting teams who’d made similar transitions.

Action (What We Do Now)

  • Immediate next steps
  • Quick experiments
  • Risk mitigation
  • Learning activities

Example: When evaluating a new frontend framework, Action meant building small prototypes, running performance tests, and documenting developer experience.

Review (How We Check Progress)

  • Success metrics
  • Check-in schedule
  • Pivot criteria
  • Learning documentation

Example: For our product localization efforts, Review included weekly market feedback sessions, monthly performance metrics, and clear criteria for expanding or pulling back.
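To make CLEAR concrete, a lightweight decision record works well. The sketch below is my own illustration – the field names mirror the five elements above, but the `open_questions` helper and everything else here are assumptions, not part of the framework itself:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a CLEAR decision record kept alongside a design doc.
@dataclass
class ClearRecord:
    decision: str
    context: list[str] = field(default_factory=list)      # what we know
    limitations: list[str] = field(default_factory=list)  # what we don't know
    exploration: list[str] = field(default_factory=list)  # how we'll learn
    action: list[str] = field(default_factory=list)       # what we do now
    review: list[str] = field(default_factory=list)       # how we check progress

    def open_questions(self) -> int:
        """Unknowns not yet matched by a learning step."""
        return max(0, len(self.limitations) - len(self.exploration))

record = ClearRecord(
    decision="Adopt microservices?",
    context=["p95 latency is 800 ms", "team of 12 engineers"],
    limitations=["operational cost at our scale", "team's Kubernetes experience"],
    exploration=["prototype one extracted service"],
    action=["load-test the monolith first"],
    review=["check in every two weeks; pivot if the latency target is met"],
)
print(record.open_questions())  # 1 unknown still has no learning plan
```

The point isn’t the code – it’s that writing the unknowns down next to the knowns makes “I don’t know” reviewable instead of vague.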

Type 1/Type 2 Decisions

After introducing this framework, we had to address a crucial question: When should we use it? Not every decision warrants this level of structured uncertainty. Here’s where Jeff Bezos’s Type 1/Type 2 framework becomes invaluable.

Type 1 Decisions: When Stakes Are High

These decisions are nearly irreversible, high-risk, or have long-lasting implications. They’re the ones that keep engineering leaders up at night:

  • Architecture Foundations
    • Moving to microservices
    • Database technology choices
    • Cloud provider selection
    • Core API design
  • Team Structure
    • Reorganizations
    • Location strategy (remote/hybrid)
    • Core process changes
    • Hiring strategies
  • Product Direction
    • Platform decisions
    • Market focus
    • Core feature removals
    • Major pricing changes

For Type 1 decisions, the CLEAR framework isn’t just helpful—it’s essential. Taking time here isn’t indecision; it’s wisdom.

Type 2 Decisions: When Speed Matters

These are reversible choices where perfect information isn’t necessary:

  • Sprint Planning
    • Task prioritization
    • Story point estimates
    • Development tool choices
    • Minor feature tweaks
  • Team Operations
    • Meeting schedules
    • Code review processes
    • Documentation formats
    • Development environment setup

For Type 2 decisions, light application of CLEAR might be enough, or you might skip it entirely.
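A toy triage helper makes the split concrete. This is my own illustration, not part of Bezos’s framework; the `blast_radius` labels are assumptions:

```python
# Toy triage sketch: score a decision on reversibility and blast radius
# to suggest how much CLEAR treatment it deserves.

def decision_type(reversible: bool, blast_radius: str) -> str:
    """blast_radius: 'team', 'org', or 'company' (assumed labels)."""
    if not reversible or blast_radius == "company":
        return "Type 1: slow down, apply CLEAR in full"
    if blast_radius == "org":
        return "Type 1: likely worth the full CLEAR treatment"
    return "Type 2: decide quickly, apply CLEAR lightly or skip it"

print(decision_type(reversible=False, blast_radius="company"))
print(decision_type(reversible=True, blast_radius="team"))
```

Even a crude rule like this beats arguing each time about whether a decision “feels big”.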

The Psychology of Safe Uncertainty

Building on the previous sections, let’s dive deep into what might be the most critical element: creating an environment where uncertainty becomes a strength rather than a weakness.

Understanding Psychological Safety

The term “psychological safety” was coined by Harvard’s Amy Edmondson, but I’ve seen it play out daily in engineering teams. It’s not just about feeling comfortable—it’s about creating an environment where:

  • Engineers question assumptions without fear
  • Junior team members challenge senior architects
  • Failed experiments are seen as valuable data
  • “I don’t know” is the start of discovery, not a sign of weakness

The Three Levels of Safety

1. Personal Safety

  • What It Looks Like:
    • Speaking up about knowledge gaps
    • Admitting mistakes early
    • Asking “obvious” questions
    • Challenging popular opinions
  • How to Build It:
    • Leaders go first with vulnerability
    • Reward early problem identification
    • Share your own learning journey
    • Make “I don’t know” a strength signal

2. Team Safety

  • What It Looks Like:
    • Open discussion of technical debt
    • Honest project status updates
    • Proactive risk identification
    • Collaborative problem-solving
  • How to Build It:
    • No blame post-mortems
    • Celebrate identified risks
    • Share credit for successes
    • Joint exploration of unknowns

3. Organizational Safety

  • What It Looks Like:
    • Transparent decision-making
    • Open discussion of failures
    • Cross-team learning
    • Systemic problem-solving
  • How to Build It:
    • Public learning repositories
    • Cross-team retrospectives
    • Open architecture reviews
    • Shared assumption tracking

Common Barriers to Safety

1. The Expert Trap

When senior engineers feel they must always have answers:

  • Solution: Celebrate senior engineers who say “let’s figure this out together”
  • Example: Our principal engineer starts architecture reviews with “Here’s what I’m uncertain about…”

2. The Speed Illusion

When teams feel pressure to have immediate answers:

  • Solution: Show data on how rushed decisions cost more time
  • Example: Our “Decision Speed vs. Outcome Quality” dashboard

3. The Status Game

When knowledge becomes a power tool:

  • Solution: Reward teaching and knowledge sharing over knowing
  • Example: Our promotion criteria explicitly valuing mentorship, the ability to cope with the unknown, and solid documentation

Creating Safe-to-Fail Environments

Bounded Experiments

  • Set clear learning objectives
  • Define acceptable failure parameters
  • Create safe testing environments
  • Document learning outcomes

Progressive Disclosure

  • Start with small risks
  • Build trust through transparency
  • Increase scope gradually
  • Maintain consistent feedback loops

Handling Resistance and Objections

Let’s address the common pushback you’ll face when implementing this approach:

“We Don’t Have Time for This”

The Objection:

“Moving fast is our advantage. We can’t slow down for all this process.”

The Response:

  • Share real data about rework costs
  • Show metrics on rushed decision failures
  • Present the Utah study findings about bias and speed
  • Demonstrate how structured uncertainty actually speeds up good decisions

Example:

“Last quarter, rushed decisions cost us 442 engineering hours in rework. Using a systematic approach on similar decisions, this quarter reduced that to 64 hours.”

“Leaders Should Have Answers”

The Objection:

“The team looks to us for direction. Saying ‘I don’t know’ undermines confidence.”

The Response:

  • Share examples of respected leaders embracing uncertainty
  • Show how false certainty damages trust
  • Demonstrate improved team engagement metrics
  • Present case studies of successful uncertain exploration

Conclusion: The Power of Structured Uncertainty

The research is clear: embracing uncertainty leads to better decisions. But more importantly, it builds stronger, more resilient teams.

Remember:

  • Quick decisions often mask bias
  • Structured uncertainty beats rushed certainty
  • Teams thrive when exploration is safe
  • Leadership means finding answers, not having them

Your challenge: Tomorrow, when faced with a complex decision, resist the urge to have an immediate answer. Instead, say: “Let’s explore this together”.

Then watch how your team transforms.

Because in the end, the best leaders aren’t those who know everything – they’re the ones who know how to figure out anything.
