Plan for learning, not certainty
We spend time planning to avoid time adjusting. But great products emerge from trying things, not gathering requirements.
Engineers want autonomy over how to build. But when it comes to what to build, they often want someone else to own the risk of being wrong. Nobody wants to spend months building something that fails. In a system that rewards delivery over iteration, requirements feel like necessary protection.
Great products are the result of trying things and learning. Mistakes are part of the process, but we tend to view them as failures we could have prevented. Instead, we gather requirements, investing time planning to avoid time adjusting. We aim for the most certainty when we have the least understanding.
A better approach is to embrace iteration, not avoid it. What matters most is building and adjusting, with clear direction and guardrails to keep mistakes small and learning fast. Instead of waiting for false certainty, make your guesses explicit and start testing them together.
Requirements build the wrong thing slowly
Requirements assume we can get the answers at the beginning. They're written before we've built or tested anything, or learned from real users. We document what we think needs to happen, treat those documents as truth, and then wonder why reality doesn't cooperate.
In the early stages, thinking about requirements isn't inherently bad. They can help organize complex problems or align teams early on. The problem is treating rough ideas as commitments instead of guesses, optimizing for the plan instead of the outcome.
Thorough planning is also slow. Designers wait for PMs to write a product vision, and engineers wait for designers to create prototypes. Each team needs the previous phase to be "done" before they begin. We optimize for a set of approved decisions, not shared knowledge. Requirements become frozen guesses dressed up as answers.
We try to fix this by having teams work in parallel, but they still don't work in concert. Engineers never talk to customers and don't engage with designs. Designers don't learn about technical constraints until it's too late to matter. Product doesn't benefit from either perspective until the work is already defined.
The further we go down this expensive road, the more it costs to change course. If only we'd gathered better requirements. But feedback isn't failure, it builds understanding. The solution isn't better requirements, it's to plan for learning.
Hypotheses make better decisions faster
A hypothesis is an honest guess about what we don't know. It makes our assumptions explicit. Say you're working on a healthcare app. A hypothesis might sound like: "Nurses struggle to find patient records quickly because the search requires an exact name match."
Notice what this includes: who's affected, what's wrong, and why we think it's happening. That last part is key: if we're wrong about the cause, we can learn that before we build anything.
That's different from a requirement that says "Build fuzzy search for patient names." The requirement is a solution we're committed to. The hypothesis is a guess we can test in a variety of ways. Maybe a better design uses numbers or QR codes instead of searching for names at all.
Not everything needs testing
Every project has constraints: things that must be true regardless of what you learn. They may include customer deadlines, tech stacks, budgets, security controls, accessibility standards, or performance targets. These are non-negotiable boundaries within which you explore and iterate.
The mistake is treating everything like a constraint. When teams call user preferences, feature ideas, and solution approaches "requirements," they freeze guesses that should stay flexible. The real work is distinguishing what you know from what you're guessing.
Know which decisions are reversible
Not all decisions deserve the same level of planning. Decisions that are hard to reverse warrant careful thought. Choosing a database architecture, selecting a cloud platform, or defining your data model all have lasting consequences.
But most decisions are easy to reverse. UI layouts, feature prioritization, and user flows can all change quickly based on what you learn. To avoid mistakes, teams often treat every decision with a gravity it doesn't deserve, planning extensively for things that are cheap to change.
For reversible decisions, experiment faster. Test your guesses, learn what works, and adjust. Save your planning energy for the decisions that actually matter.
Validate your guesses together
Once you have a hypothesis, the work shifts from building features to achieving outcomes. Figure out how to validate your guess and whether it's worth doing.
Instead of gated reviews and approvals, everyone works together from the start. Engineering, design, and product collaborate on the hypothesis and validation, not in sequence. Technical constraints inform what you test. Customer needs shape what you build.
Start with these questions:
What does success look like?
Maybe it's search times under 2 seconds. Maybe it's reducing failed searches by half. Focus on the outcome you want, not the feature you'll build.
How will we know if it works?
Decide what you'll observe or measure. Time per search? Failure rates? User feedback? Pick the signals that matter.
What's the smallest way to find out?
Instead of building the full feature, can you test with a script and a spreadsheet? Watch users with the current system? Build a rough prototype? Get the learning as cheaply as possible.
Is this worth it?
Not every hypothesis matters equally. Some are critical. Others are just interesting. Define how long you'll spend and who needs to be involved. Check progress as you go and adjust.
This doesn't mean everything becomes an experiment. Sometimes the answer is obvious, in which case just do the work. But when you're uncertain, start by defining success and learning how to get there.
Document for conversation, not knowledge
Documents are critical, especially on remote teams, as they're how you communicate and align. But they're a medium for sharing ideas, not the goal itself. When you're moving quickly, keeping documents up to date can become a distraction instead of adding value.
At the start, write down the problem, known constraints, and what success looks like. Capture who's involved and what role they play: approver, advisor, spectator. Then focus your time on prototyping and learning, not writing.
As you validate ideas, write down what you learned: options you considered, decisions you made, and why. Keep it lightweight. Don't produce too many documents or spend too long creating them. Work together on a small set of documents instead of fragmenting into many. Modern tools make it easy to collaborate without waiting for handoffs.
Good working documentation facilitates shared understanding and conversation, not comprehensive knowledge. Treat documents as living artifacts to iterate on, not something to finish. If you're spending more time writing than learning, something's wrong.
Testing our way to 6 million users
When Daniel Burka and I built Simple, we couldn't have written requirements for an offline-first mobile app for healthcare workers. We'd never visited rural India, let alone a medical clinic, and we definitely didn't know what nurses wanted. Our goal wasn't software, it was empowering a public health program for hypertension.
We started by spending two weeks visiting clinics in person and learning how they worked. Their paper records were slow to find and captured redundant, unused information. Nearly every nurse had a mobile phone, but they only had about 4 minutes with each patient. If the app was slow or hard to use, they had no use for it. We heard the same feedback over and over:
"Just don't make my life harder."
This led to our hypothesis: "Nurses will use a mobile app to record patient data if it's faster than their current paper system." To validate it, we hacked together a clickable prototype and watched nurses use it.
We learned two critical things. First, the app needed to work offline. Network access at the clinics was poor, but nurses carried their personal phones everywhere and would have connectivity as they traveled. Second, patient visits needed to be logged in under 15 seconds. Any longer and nurses said they'd skip it entirely and the data would be lost.
These insights came from testing, not planning. Validating our hypothesis narrowed our focus to building what mattered most. We launched a mobile app in just 4 months. That 15-second threshold became one of our guiding principles. Simple now serves over 6 million patients.
Start with hypotheses. Make a plan to learn what's true. Build using what you discover. Repeat.
What this takes
Moving from requirements to hypotheses requires trust. Trust that your team can figure things out together, that learning beats being right on the first try, and that real feedback beats perfect planning.
It also requires discipline. Discovery can sprawl, so constrain the work by defining clear timelines and success criteria. Pick a decision owner. Check in often, and know when to stop exploring and start building.
Eventually, discovery has to converge. When you've learned enough, summarize what you know. That's when documentation starts to serve you again. Before launch, you'll need acceptance criteria, interface contracts, and operational readiness. But those emerge from learning, not planning.
Requirements feel safer because they create the illusion of control, but the work will change. You'll learn things that invalidate your assumptions. The question is whether you design for that from the start or fight it the whole way.
The next time you start a project, don't ask for requirements. Ask what outcome you need to deliver and begin learning how to make it happen together.
Thank you to Faye Cheadle, Liz Hustedt, Jim Van Fleet, and Katie Hoffman for helping me edit and refine my thoughts for this article.