Designing for Behavior Before 'Growth' Existed
Before growth teams, before funnels, before A/B testing at scale—there was just watching.
Watching how people actually used the thing you built.
In the early 2000s, we didn't have the tooling that every startup takes for granted today. No Optimizely. No Mixpanel. No Segment. No dashboard that updated in real time.
We had log files. We had session data that took days to parse. We had customer service calls that someone transcribed by hand.
And we had our own eyes.
That constraint shaped how I think about user behavior in ways that more sophisticated tooling might have prevented.
**The Poverty of Data**
At Nike, we could see that people were buying. We could see where they dropped off.
But we couldn't tell you why.
Was the checkout button confusing? Was shipping cost too high? Was the customer interrupted by a phone call? Did they simply change their mind?
We didn't know.
That uncertainty forced a different approach:
We had to talk to people.
Not focus groups. Not survey panels. Real customers, on the phone, willing to spend twenty minutes describing what they did and why.
I remember sitting with a customer service rep, listening to a caller describe exactly where she got stuck. She wasn't angry. She was confused. The language on the site didn't match the language on her credit card statement.
We fixed that line of text.
Conversion improved by 3 percent.
No A/B test. No statistical significance calculation. Just listening to one person and believing her.
That instinct—that individual behavior contains pattern-level insight—has never left.
**What Behavior Actually Teaches**
When you strip away the dashboards, the cohorts, the attribution models—what's left is simple:
People do what makes sense to them.
Not what makes sense to you.
Not what the strategy document says.
What makes sense to them.
In 2003, we noticed a pattern: People would add items to their cart, then leave. Come back days later. Add more. Leave again. Then finally purchase.
By conventional metrics, those were abandoned carts. Failures.
But the behavior told a different story:
These were researchers. They were comparing options. They were consulting someone else. They were waiting for payday.
The cart wasn't a commitment device. It was a bookmark.
That insight changed how we thought about the entire experience. We stopped optimizing for immediate conversion and started optimizing for the return visit.
We added features that made it easy to save, share, and revisit.
Conversion improved, but not because we pushed harder. Because we aligned with how people actually behaved.
**The Limits of Optimization**
Later, at Microsoft, I encountered the opposite problem: too much data.
We could measure everything. And we did.
Click rates. Time on task. Dropoff by segment. Cohort comparisons. Statistical significance at 95 percent confidence.
It was intoxicating.
But something got lost:
The question beneath the question.
We knew what people did. We didn't always know why.
And without why, optimization just chases local maxima.
You make the button greener. You move it three pixels left. You change the copy from "Buy Now" to "Get Yours."
All of it matters at the margin.
None of it matters if the fundamental proposition is wrong.
That realization—that optimization is not strategy—came from having too much data, not too little.
**The Behavior That Matters Most**
Over twenty years, I've learned that certain behaviors predict durable success better than any metric:
Repeated use. Not purchase, but use. Do they come back when they don't have to?
Word of mouth. Not measured by referral links, but by genuine recommendation. Do they tell someone else without being asked?
Tolerance of friction. When something breaks, do they forgive you? That's loyalty.
Adaptability. Do they adjust when you change the experience, or do they leave?
These are not easy to instrument. You can't put them in a dashboard.
But you can feel them. You can watch for them. You can design for them.
**What Early Constraints Taught Me**
The poverty of data in those early years was actually an advantage.
It forced me to develop instincts that dashboards cannot provide:
How to watch without interfering.
How to listen without projecting.
How to trust what behavior reveals, not what people say.
When I later ran P&L at SelectBlinds, those instincts became part of how I evaluated everything:
Why is that cohort retaining better?
Why did that marketing channel stop working?
Why are margins compressing in that category?
The data told me what. The instincts told me where to look.
**For the Current Moment**
In 2026, we have more data than any human could process. AI analyzes behavior at scale. Dashboards update in milliseconds.
And yet:
The same mistakes repeat.
Teams optimize what they can measure and ignore what they can't.
They mistake activity for insight.
They forget that behavior is always rational from the inside.
The tools change. The questions do not.
Why does this make sense to them?
What are they telling us without speaking?
What would we see if we just watched?
I learned to ask those questions in a world without dashboards.
I'm still asking them today.