Linus’s Law
Given enough eyeballs, all bugs are shallow.
Born from the open-source movement, this law posits that the difficulty of finding a bug is inversely proportional to the number of skilled people looking at the code. A bug that seems impossibly obscure to a single developer becomes obvious when exposed to multiple perspectives. Because every individual developer has cognitive biases and blind spots, software quality is ultimately a function of collective scrutiny.
Why it happens:
- The Author’s Blind Spot: The developer who wrote the code is often the least qualified to find its flaws. They review the code through the lens of their original intent, reading what they meant to write, not what they actually wrote. A fresh set of eyes is free from this bias.
- Diversity of Expertise: A single developer cannot be an expert in everything. A “shallow” bug to a security engineer (e.g., an injection vulnerability) might be invisible to a developer focused solely on business logic. Different reviewers bring different lenses—performance, accessibility, database efficiency, security—each capable of spotting a class of bug others would miss.
- Distributed Cognitive Load: Thoroughly vetting a piece of code is mentally taxing. Distributing this effort across multiple reviewers allows for a more comprehensive analysis. One person might focus on the high-level logic, another on edge cases, and a third on style and consistency, leading to a more robust review than any single person could manage alone.
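The "different lenses" point is easy to make concrete. In this hedged sketch (a hypothetical user-lookup helper, with SQLite standing in for any database), a reviewer focused on business logic sees code that returns the right rows, while a security reviewer immediately flags the string-interpolated query as injectable:

```python
import sqlite3

# A business-logic reviewer checks that lookups return the right row.
# A security reviewer spots that `name` is interpolated into the SQL,
# so input like "x' OR '1'='1" rewrites the query itself (SQL injection).
def find_user_unsafe(cursor, name):
    cursor.execute(f"SELECT id, name FROM users WHERE name = '{name}'")
    return cursor.fetchall()

# The fix a second pair of eyes would request: a parameterized query,
# which passes `name` as data rather than splicing it into the SQL text.
def find_user_safe(cursor, name):
    cursor.execute("SELECT id, name FROM users WHERE name = ?", (name,))
    return cursor.fetchall()
```

Fed the payload `x' OR '1'='1`, the unsafe version returns every user in the table; the safe version returns nothing. The bug is "shallow" only to the reviewer whose expertise matches it.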
What to do about it:
- Mandate Rigorous Code Reviews: Make peer review a non-negotiable part of your development process. Set a clear standard (e.g., “no code merges without two approvals”) and treat it as a critical quality gate, not a rubber-stamp formality.
- Cultivate a Culture of Constructive Criticism: Model and reward thorough, critical feedback. Frame reviews as a collaborative effort to improve the code, not as a personal judgment of the author. An environment where engineers are afraid to ask tough questions is an environment where bugs will thrive.
- Systematize Cross-Team Reviews: Break down team silos. Encourage or require engineers to review code from outside their immediate team. This cross-pollination not only catches more bugs by introducing diverse perspectives but also spreads knowledge and best practices throughout the organization.
- Utilize Pair and Mob Programming for Critical Code: For the most complex or high-risk parts of your system, apply Linus’s Law in real-time. Pairing (two developers) or mobbing (a whole team) on a single piece of code bakes the review process directly into creation, catching bugs the instant they are typed.
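The "two approvals" gate in the first point can be enforced mechanically rather than by convention. A minimal sketch, assuming reviews arrive as dicts with hypothetical `author` and `state` fields (the names are illustrative, not any particular platform's API):

```python
REQUIRED_APPROVALS = 2

def merge_allowed(reviews, author):
    """Return True only if enough distinct non-author reviewers approve.

    `reviews` is a list of dicts like {"author": "bea", "state": "APPROVED"},
    in submission order; a reviewer's later state supersedes an earlier one.
    """
    latest = {}  # reviewer -> most recent review state
    for r in reviews:
        if r["author"] != author:  # self-review never counts
            latest[r["author"]] = r["state"]
    approvals = sum(1 for state in latest.values() if state == "APPROVED")
    return approvals >= REQUIRED_APPROVALS
```

Encoding the gate this way keeps it a quality gate rather than a formality: a self-approval or a stale approval that was later superseded by a request for changes does not count toward the threshold.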
Pesticide Paradox
If the same tests are repeated over and over again, eventually the same test cases will no longer find new bugs.
Borrowing its name from agriculture, this principle states that an automated test suite, like a pesticide, loses its effectiveness over time. A suite that never changes is excellent at catching regressions—ensuring that old, fixed bugs don’t reappear. However, it becomes progressively worse at finding new or different kinds of bugs, giving a false sense of security while the underlying quality of new code may be degrading.
Why it happens:
- Implicit Adaptation: Developers begin to write code that satisfies the existing test suite. They know the specific conditions being checked and ensure their code passes, but this offers no guarantee against flaws in areas or conditions the tests don’t cover.
- Codebase Evolution: The application is not static. As new features are added and old ones are refactored, new code paths and edge cases are introduced. A static test suite does not evolve with the codebase, meaning its coverage of the current application state steadily decreases over time.
- Inurement to Specific Flaws: The test suite becomes perfectly tuned to find a specific class of bug—the ones its authors anticipated. It remains blind to entire categories of errors that were not considered during its creation, such as novel security vulnerabilities, performance regressions, or concurrency issues.
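The "implicit adaptation" failure mode is simple to demonstrate. In this hypothetical sketch, the suite checks only the cases its author anticipated, so it stays green on every run while an entire input class (here, `old == 0`) remains broken:

```python
def percent_change(old, new):
    """Percentage change from old to new (buggy: crashes when old == 0)."""
    return (new - old) / old * 100

# The frozen "pesticide" suite: every anticipated case, and only those.
def run_legacy_suite():
    assert percent_change(100, 150) == 50.0
    assert percent_change(200, 100) == -50.0
    assert percent_change(50, 50) == 0.0
    return "all green"
```

Run the suite a thousand times and it passes a thousand times, yet the first real input with `old == 0` raises `ZeroDivisionError`. Repeating the same cases can never find that bug; only a new test can.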
What to do about it:
- Treat Tests as Living Code: Continuously review, refactor, and expand your test suite. Every new feature must be accompanied by new tests that cover its logic. When an existing feature is changed, its tests must be updated to reflect the new reality. A “test-first” or TDD approach naturally encourages this.
- Diversify Your Testing Portfolio: Augment your unit tests with other methods designed to find different kinds of bugs. Introduce integration testing, property-based testing, and end-to-end testing. Each method attacks the code from a different angle.
- Use Production Bugs to Fortify Tests: For every bug that escapes to production, write a new automated test that reproduces it before you fix it. This ensures your test suite learns from its failures and becomes progressively stronger.
- Embrace Exploratory Testing: Schedule time for skilled humans to intentionally try to break the application. Unlike scripted tests, exploratory testing leverages human curiosity and intuition to uncover unexpected interactions and novel failure modes.
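Property-based testing, mentioned in the diversification point, attacks the pesticide paradox directly: instead of a few fixed examples, it asserts an invariant over many generated inputs. Dedicated tools such as Hypothesis generate and shrink inputs for you; the standard-library sketch below (a hypothetical run-length codec) shows the core idea, using the encode/decode round trip as the invariant:

```python
import random

def rle_encode(s):
    """Run-length encode a string into (char, count) pairs."""
    pairs = []
    for ch in s:
        if pairs and pairs[-1][0] == ch:
            pairs[-1] = (ch, pairs[-1][1] + 1)
        else:
            pairs.append((ch, 1))
    return pairs

def rle_decode(pairs):
    return "".join(ch * n for ch, n in pairs)

def check_roundtrip_property(trials=500, seed=0):
    """Property: decode(encode(s)) == s for arbitrary strings."""
    rng = random.Random(seed)
    for _ in range(trials):
        s = "".join(rng.choice("abc") for _ in range(rng.randint(0, 30)))
        assert rle_decode(rle_encode(s)) == s, f"round trip failed for {s!r}"
    return trials
```

Because the inputs differ from what any example-based suite would contain, this style of test keeps probing conditions the original authors never wrote down, which is exactly the coverage a static suite loses over time.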
Sturgeon’s Law
90% of everything is crap.
This blunt axiom is not an expression of cynicism but a powerful filter for strategic focus. It posits that the vast majority of any creative or operational output is low-value. In software, this means that most feature ideas will fail to impact key metrics, most lines of code in a mature system are rarely executed cruft, and most agenda items in a status meeting are informational noise. The critical work is to separate this 90% of dross from the 10% of gold.
Why it happens:
- The Cost of Ideas is Zero: Generating ideas is cheap and easy, leading to a massive volume of feature requests, suggestions, and “what ifs.” The cost of implementing these ideas, however, is extremely high. Without a strong filter, this imbalance naturally leads to a backlog dominated by low-value work.
- Organizational Entropy: Systems and processes tend toward disorder. Over time, products accumulate features that were once relevant but are now obsolete. Code is added but rarely removed. Meetings are added to the calendar but never questioned. This accumulation of low-value assets creates drag on the entire organization.
- Loss Aversion: It is psychologically harder to remove something than to add it. Teams are often afraid to deprecate a feature, even one with near-zero usage, for fear of angering a handful of users. This leads to bloated, unfocused products that are difficult to maintain and confusing for new users.
What to do about it:
- Measure Everything, Prioritize Ruthlessly: Instrument your application to understand which features are actually used and which provide the most value. Use this data to build a prioritization framework that brutally favors the 10% of work that impacts core business goals.
- Practice Strategic Abandonment: Make feature deprecation and code removal a celebrated, first-class engineering activity. Regularly schedule time to identify and delete unused code, obsolete features, and pointless processes. This isn’t just “cleanup”; it’s a strategic act that reduces complexity, lowers maintenance costs, and improves focus.
- Focus on Problems, Not Features: Instead of asking “What should we build?”, ask “What is the most important user problem we can solve right now?” This reframing naturally filters out a huge number of low-value “nice-to-have” feature ideas and directs the team’s energy toward the 10% of work that truly matters to customers.
- Apply a “Value Audit” to Processes: Regularly question recurring activities. For every standing meeting, ask: “What decisions are made here that couldn’t be made asynchronously?” For every report, ask: “What action is taken based on this data?” If the answer is unclear, eliminate the activity. This frees up the organization’s most valuable resource—time—for high-impact work.
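The "measure everything" advice above ultimately reduces to counting. A hedged sketch, assuming usage events are recorded as dicts with a hypothetical `feature` field: rank features by their share of total usage, then surface the long tail as candidates for strategic abandonment.

```python
from collections import Counter

def usage_report(events):
    """Rank features by usage share, most-used first.

    `events` is a list like [{"feature": "export"}, ...]; returns
    (feature, count, share_of_total) tuples.
    """
    counts = Counter(e["feature"] for e in events)
    total = sum(counts.values()) or 1  # avoid dividing by zero on no data
    return [(f, c, c / total) for f, c in counts.most_common()]

def abandonment_candidates(events, threshold=0.01):
    """Features whose usage share falls below `threshold` (default 1%)."""
    return [f for f, _, share in usage_report(events) if share < threshold]
```

The exact event schema and threshold are assumptions; the point is that with even this much instrumentation, "which 10% of features carry the product" stops being a matter of opinion.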