A diagnostic guide for spotting decay in experimentation culture

A broken testing program rarely looks broken.
It usually looks busy.

You’ll see dashboards with colorful metrics, Slack channels buzzing with test IDs, and quarterly decks boasting a dozen new experiments. But beneath that surface, the culture might already be fading. The silence isn’t in the data. It’s in the conversations.

When experimentation turns into a box-checking exercise, you can feel it before you can prove it. The spark dulls. The curiosity dries up. The work still happens, but the learning slows to a crawl.


1. The first signs of decay

The earliest signs of a dying testing culture aren’t in the numbers. They’re in how people talk.

Healthy experimentation teams can’t stop talking about their findings. They share the odd results. They argue about what went wrong. They celebrate surprises. It’s a noisy, messy kind of curiosity.

When that noise disappears, when meetings turn into rote metric reviews or scrolls through endless dashboards of test volume, the culture is quietly flatlining.

Another sign is comfort.

If seventy percent or more of your tests are “winners,” you’re not running an effective program. You’re running a validation engine. A seventy percent win rate means the team is testing what it already believes. No risk. No stretch. Just confirmation.

And that safety isn’t benign. It’s fear. Fear of failure. Fear of a negative test that someone will have to explain to leadership. Fear that curiosity won’t be rewarded.

Experimentation thrives on uncertainty. Once a team starts designing only the tests they can win, the program stops being an engine for insight and becomes a compliance task.
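The win-rate check itself takes one line of arithmetic. Here's a minimal sketch in Python; the result fields and the 70 percent threshold are illustrative assumptions, and you should calibrate them to your own program's history.

```python
# A toy win-rate smoke test. The ExperimentResult fields and the 0.7
# threshold are illustrative, not a standard.
from dataclasses import dataclass

@dataclass
class ExperimentResult:
    test_id: str
    variant_beat_control: bool  # was a winner declared?

def win_rate(results: list[ExperimentResult]) -> float:
    """Fraction of experiments declared winners."""
    if not results:
        return 0.0
    return sum(r.variant_beat_control for r in results) / len(results)

def looks_like_validation_engine(results, threshold=0.7):
    # A persistently high win rate suggests the team only tests
    # what it already believes will work.
    return win_rate(results) >= threshold
```

The number matters less than the trend: a win rate that creeps up quarter after quarter is a team slowly retreating from uncertainty.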


2. Start with conversation, not code

When I’m called to audit a testing program, I don’t start by looking at data. I start by listening.

I want to know how people talk about experimentation. Who’s included? Whose voices are missing? If only a small inner circle is “allowed” to run tests—usually marketing or R&D—the culture’s already too narrow. Real experimentation cultures are participatory. They belong to everyone who touches the customer, not just the analytics team.

Then I look at governance.

Do people understand the principles of experimental design, or are they just throwing variants into the wild and hoping significance will sort it out? Are there real control groups, or just a dozen “test” branches competing for oxygen?
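For contrast, here's what a real control comparison looks like at its simplest: one protected control group, and each variant tested against it. Below is a hand-rolled two-proportion z-test in Python, standard library only; the conversion counts are made up for illustration.

```python
# A minimal two-proportion z-test against a single, stable control group.
# Illustrative only; not a substitute for a real experimentation platform.
import math

def two_proportion_ztest(conv_c, n_c, conv_t, n_t):
    """Return (z, two-sided p) for treatment vs. control conversion counts."""
    p_c, p_t = conv_c / n_c, conv_t / n_t
    pooled = (conv_c + conv_t) / (n_c + n_t)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_c + 1 / n_t))
    z = (p_t - p_c) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided
    return z, p_value

# One protected control; every variant is compared against it.
control = (120, 2400)                           # conversions, visitors
variants = {"B": (150, 2400), "C": (131, 2350)}
for name, (conv, n) in variants.items():
    z, p = two_proportion_ztest(*control, conv, n)
    print(f"variant {name}: z={z:.2f}, p={p:.3f}")
```

The point isn't the formula. It's the shape: a baseline that doesn't move, and comparisons that all answer to it.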

And the big one—does the program have a learning loop? If insights aren’t feeding into future design, that’s not experimentation. That’s activity.

The real audit isn’t about catching mistakes. It’s about mapping the gap between the science of testing and the human behavior inside it.


3. Build a checklist for truth-telling

Every experimentation culture needs its own internal mirror. I keep a mental checklist that I use on nearly every audit. It’s not fancy, but it’s effective.

Communication: Is experimentation a shared language, or a specialized dialect that only analysts speak?

Governance: Is there clarity around who designs, reviews, and approves tests?

Control integrity: Are baselines stable, or are control groups constantly shifting?

Learning loops: Do results feed the next experiment, or vanish into a deck no one reads?

Cultural tone: Do people feel safe to fail, or is “significance” the only safe answer?

Curiosity signals: Are people still asking “why?”

Rigor meets imagination: Are the statistical checks and creative ideas keeping pace with each other?

If I find more than two of these in the red, the program isn't broken; it's just ungoverned. Either way, the fix starts with honesty.
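If you want to make the mirror literal, the decision rule fits in a few lines. A toy sketch; the statuses below are hypothetical, and in practice each one comes from listening, not a script.

```python
# A toy scoring of the audit checklist. Statuses are hypothetical;
# each verdict comes from conversations, not a query.
checklist = {
    "communication": "green",
    "governance": "red",
    "control_integrity": "red",
    "learning_loops": "red",
    "cultural_tone": "green",
    "curiosity_signals": "green",
    "rigor_meets_imagination": "green",
}

reds = [check for check, status in checklist.items() if status == "red"]
if len(reds) > 2:
    print(f"Ungoverned, not broken. Start with: {', '.join(reds)}")
```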


4. Balance numbers with narrative

Numbers alone don’t tell the story of a culture.

You can have a flawless win rate and a lifeless team. You can have excellent dashboards and zero curiosity.

That’s why I look for narrative in the data. I sometimes analyze the text of experiment descriptions—how people frame the problem, what nouns and verbs they use, even which pronouns dominate. You can see a lot about a team’s mindset in how they write about change.

If every experiment starts with “We want to prove…” instead of “We want to learn…,” that’s a signal. If the tone reads like a marketing report instead of an inquiry, that’s another.
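This kind of framing analysis doesn't need heavy NLP. Here's a minimal sketch of the idea in Python; the word lists and sample descriptions are illustrative assumptions, not a validated lexicon.

```python
# Count "prove" vs. "learn" framing, plus pronoun balance, in
# experiment descriptions. Word lists here are rough stand-ins.
import re
from collections import Counter

PROVE_WORDS = {"prove", "confirm", "validate", "show"}
LEARN_WORDS = {"learn", "understand", "explore", "discover"}

def framing_profile(descriptions: list[str]) -> Counter:
    profile = Counter()
    for text in descriptions:
        tokens = re.findall(r"[a-z']+", text.lower())
        profile["prove_framing"] += sum(t in PROVE_WORDS for t in tokens)
        profile["learn_framing"] += sum(t in LEARN_WORDS for t in tokens)
        # Pronoun balance: "we/our" vs. "I/my" hints at shared ownership.
        profile["we_words"] += sum(t in {"we", "our", "us"} for t in tokens)
        profile["i_words"] += sum(t in {"i", "my", "me"} for t in tokens)
    return profile

docs = [
    "We want to prove the new checkout flow converts better.",
    "We want to learn why mobile users abandon at step two.",
]
print(framing_profile(docs))
```

A crude count like this won't diagnose a culture on its own, but a corpus that skews hard toward "prove" is worth bringing into the room.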

Data will always tell you what happened. The narrative reveals how people thought while it happened. Both matter.


5. Fix small things first

Once the audit shows what’s broken, I never start with sweeping reform. I start small.

Fix the basics. Tighten the process. Restore meaningful control groups. Clarify ownership. Clean up old experiments that are still running out of habit.

Then focus on the emotional side of safety. Comfort is the death of experimentation, and fear is its poison. The trick is finding a middle ground: one where it's safe to try bold ideas and fail without judgment.

I talk about the “art of the possible.”
Show leaders what could happen if experimentation worked the way it was meant to—how it can serve as a risk buffer, not a risk creator. When executives see experimentation as a safer path to big wins, they become allies instead of skeptics.


6. Reignite enthusiasm

The hardest part isn’t the math. It’s the energy.

You can fix processes, add guardrails, even hire new analysts. But you can’t fake enthusiasm. When people have grown cynical about testing—when every failed experiment feels like wasted time—you’re dealing with burnout, not bandwidth.

That’s when you need a spark. Sometimes that comes from fresh hires who still believe in discovery. Sometimes it comes from sharing one strong, unexpected win. Once curiosity starts to return, the culture begins to hum again.

You’ll hear it in the language. People start saying, “Let’s test it,” with interest instead of exhaustion. They start cross-posting results. They ask better questions. The data team stops defending itself and starts storytelling again.

That’s when you know the program’s back on its feet.


7. How to tell you’re healthy again

A healthy experimentation culture doesn’t need a health check. You can feel it.

  • People share insights more than outcomes.  
  • Leaders ask for hypotheses instead of guarantees.
  • Control groups are protected, not ignored.
  • Failures are published, not hidden.
  • Curiosity feels contagious again.

If you see those signals, your testing program isn’t just repaired—it’s alive.


Final thought

When I walk into a struggling experimentation team, I tell them this: a broken testing program doesn’t mean a broken company. It just means curiosity lost its seat at the table for a while.

The fix isn’t perfection. It’s permission. Permission to test again, to fail again, and to talk about what you found.

That’s the real audit—listening for the sound of discovery coming back to life.

