
Software Engineering Laws - Coding & Development

8 min read
Series Software Engineering Laws Part 4 of 11
Knuth’s Optimization Principle

Premature optimization is the root of all evil

This principle is a direct assault on a common engineering impulse: the desire to write clever, highly performant code from the outset. It argues that this impulse is not only unhelpful but actively harmful. The time spent optimizing code before it’s been proven to be a bottleneck is time stolen from delivering value and ensuring correctness. The vast majority of significant performance gains come from architectural decisions, not from micro-optimizing individual lines of code.

Why it happens:

  • It Obscures Intent: Optimized code is frequently more complex and less readable than its straightforward counterpart. It sacrifices clarity for perceived performance gains, making the code harder to debug, maintain, and modify in the future.
  • It’s Based on Guesswork, Not Data: Without profiling a working system under realistic load, any attempt at optimization is pure guesswork. Developers are notoriously bad at predicting where performance bottlenecks will occur. Effort is inevitably wasted on optimizing functions that account for a trivial fraction of the system’s runtime.
  • It Addresses the Wrong Problem: Most severe performance issues are architectural sins: N+1 query patterns, chatty API calls, inefficient data structures, or a lack of caching. Focusing on algorithm-level tweaks while ignoring these macro-level issues is a futile exercise that yields negligible real-world impact.
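The N+1 pattern is concrete enough to show in a few lines. The sketch below uses a hypothetical authors/posts schema and plain SQLite to contrast the per-row query loop with the single JOIN that replaces it; the table and column names are illustrative, not from any real system.

```python
import sqlite3

# Hypothetical schema for illustration: authors and their posts.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE posts (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO authors VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO posts VALUES (1, 1, 'On Engines'), (2, 1, 'Notes'), (3, 2, 'Compilers');
""")

def titles_n_plus_one():
    # N+1 pattern: one query for the authors, then one more query per author.
    result = {}
    for author_id, name in conn.execute("SELECT id, name FROM authors"):
        rows = conn.execute(
            "SELECT title FROM posts WHERE author_id = ?", (author_id,)
        )
        result[name] = [title for (title,) in rows]
    return result

def titles_single_query():
    # Architectural fix: a single JOIN replaces all the per-row queries.
    result = {}
    rows = conn.execute(
        "SELECT a.name, p.title FROM authors a JOIN posts p ON p.author_id = a.id"
    )
    for name, title in rows:
        result.setdefault(name, []).append(title)
    return result
```

Both functions return the same data; the difference is invisible in the code's output and enormous under production load, which is exactly why this class of problem dwarfs line-level tweaks.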

What to do about it:

  1. Follow the Mantra: Make It Work, Make It Right, Then Make It Fast. First, write simple, clear code that correctly solves the problem. Second, refactor that code for good design and maintainability. Only as a final step, if and only if you have data proving a performance problem exists, should you optimize the specific, measured bottleneck.
  2. Institute a “Profile-First” Rule: Make it a team policy that no performance optimization is undertaken without supporting data. A pull request aiming to improve performance must be accompanied by profiling results or benchmarks that (a) demonstrate the problem’s existence and significance, and (b) prove the proposed solution effectively resolves it.
  3. Teach Macro-Optimization Thinking: Train your team to look for performance gains at the architectural level. Encourage questions like: “Can we solve this with one database query instead of 50?” or “Should this synchronous process be an asynchronous job?” These high-level changes deliver orders of magnitude more impact than any local code tweak.
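A "profile-first" rule needs only the standard library to enforce. This minimal sketch profiles a hypothetical request handler (the function names are invented for illustration) and produces the kind of report a performance PR should attach, rather than guessing at the bottleneck.

```python
import cProfile
import io
import pstats

# Two hypothetical candidates for "optimization": which one actually matters?
def format_ids(ids):
    return ",".join(str(i) for i in ids)

def dedupe(items):
    seen, out = set(), []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

def handle_request(n=20000):
    ids = list(range(n)) * 2
    unique = dedupe(ids)
    return format_ids(unique)

# Measure before touching anything: profile a realistic call.
profiler = cProfile.Profile()
profiler.enable()
handle_request()
profiler.disable()

# Render the evidence that would accompany an optimization PR.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
report = stream.getvalue()
print(report)
```

The report ranks functions by cumulative time, so the discussion starts from measured hot spots instead of intuition.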

Kernighan’s Law

Everyone knows that debugging is twice as hard as writing a program in the first place. So if you’re as clever as you can be when you write it, how will you ever debug it?

This law is a direct warning against intellectual vanity in programming. It establishes a fundamental relationship: since debugging demands roughly twice the cleverness that writing does, the cleverness you spend writing code must stay well below your limit. By writing code at the absolute edge of your ability, you leave yourself no cognitive surplus to understand it later when it inevitably breaks.

Why it happens:

  • The Loss of Context: When you write a piece of “clever” code, you are immersed in the problem’s context. Six months later, that context is gone. The intricate mental model you built has vanished, and you are left trying to decipher what your past, “smarter” self was thinking.
  • Code is Read More Than It Is Written: A single line of code may be written once, but it will be read dozens or hundreds of times by teammates, future maintainers, and yourself. Optimizing for write-time cleverness at the expense of long-term readability creates a massive net loss in productivity for the entire team.
  • Cleverness Obscures Intent: The primary purpose of code is not just to instruct the computer, but to communicate its intent to other humans. “Clever” code, with its dense syntax and non-obvious tricks, prioritizes mechanical efficiency over human communication, making it brittle and dangerous to modify.

What to do about it:

  1. Write “Boring” Code: Strive to write code that is simple, obvious, and predictable. Use well-known patterns and clear, descriptive variable names. Your goal should be that a new team member can understand the code’s purpose without needing a detailed explanation.
  2. Make Clarity a Primary Feature: Treat readability as a non-negotiable requirement, on par with correctness and performance. During code reviews, explicitly ask the question: “Is this code clear?” If the answer is no, it should be rejected and simplified, no matter how technically correct or “clever” it is.
  3. Favor Verbosity Over Terseness: A slightly longer, more explicit block of code is almost always superior to a dense one-liner that accomplishes the same thing. Use intermediate variables to give names to complex sub-expressions. Use simple if/else blocks instead of nested ternary operators. The extra lines are a small price to pay for a massive gain in clarity.
  4. Comment the “Why,” Not the “What”: If you must write a complex piece of code to solve a genuinely hard problem, do not use comments to explain what the code is doing—the code itself should do that. Instead, use comments to explain why the complexity is necessary. Document the business constraints or performance requirements that forced you to abandon a simpler solution.
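Points 3 and 4 are easiest to see side by side. Here is a hypothetical shipping-fee rule (the rule itself is invented for illustration) written first as a dense nested-ternary one-liner, then expanded with named intermediates and plain if/else blocks:

```python
def fee_terse(weight, member, express):
    # Dense nested ternaries: correct, but hostile to future readers.
    return 0 if member and not express else (15 if express else (5 if weight < 2 else 9))

def fee_clear(weight_kg, is_member, wants_express):
    # The same rule, with a name for each branch of the business logic.
    if is_member and not wants_express:
        # Why: standard shipping is free for members under this (invented) policy.
        return 0
    if wants_express:
        return 15
    is_light_parcel = weight_kg < 2
    return 5 if is_light_parcel else 9

# The two versions agree on every input; only one explains itself.
for w in (1, 3):
    for m in (True, False):
        for e in (True, False):
            assert fee_terse(w, m, e) == fee_clear(w, m, e)
```

The clear version is three times longer and perhaps ten times faster to debug at 2 a.m., which is the trade Kernighan's Law tells you to make.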

Tesler’s Law

Every application has an inherent amount of complexity that cannot be removed or hidden

Also known as the Law of Conservation of Complexity, this principle states that for any given process, there is a core of complexity that cannot be reduced. It can only be moved. The central design question is not if this complexity will be handled, but who will handle it. Will you absorb the complexity in your backend logic, making the user’s experience simple? Or will you push it to the user, forcing them to make choices and configure options? The complexity must reside somewhere.

Why it happens:

  • Business Logic is Inherently Complex: Real-world processes have rules, edge cases, and conditional logic. A feature like booking a flight involves dates, time zones, seat availability, and pricing rules. This complexity is intrinsic to the problem domain; it cannot be wished away.
  • The “Zero-Sum” Nature of System Design: Simplifying one part of a system often complicates another. A beautifully simple, one-click UI for ordering a product requires a sophisticated backend system that can handle inventory, payments, shipping logic, and user defaults. Conversely, a simple backend might require a complex UI with many forms and dropdowns.
  • Abstractions Shift, They Don’t Eliminate: Every abstraction (APIs, frameworks, libraries) is a tool for managing complexity. It doesn’t destroy the complexity; it encapsulates it. The underlying difficulty is still present and can leak through, especially during debugging or when handling failures.

What to do about it:

  1. Consciously Allocate Complexity: Your primary job as a designer or developer is to decide where the complexity should live. For most consumer applications, the goal is to shoulder the burden yourself. You write more complex code so the user has a simpler experience. For expert tools (like a developer’s IDE or a data scientist’s notebook), you might deliberately expose some complexity to the user in exchange for greater power and flexibility.
  2. Identify the Irreducible Minimum: Before building, explicitly ask, “What is the absolute minimum set of inputs and decisions required to accomplish this task?” This helps define the core complexity you are forced to manage. Anything beyond that is accidental complexity you have introduced yourself and should be challenged.
  3. Audit Your Abstractions: When you adopt a new tool that promises simplicity, always ask, “Where did the complexity go?” Did it move into a complex configuration file? Is it hidden in a way that will make debugging impossible? Understanding this trade-off is critical. There is no magic; every simplified interface has a corresponding complexity cost somewhere else in the system.
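Allocating complexity deliberately often looks like this sketch: a hypothetical date-input helper (names and accepted formats are assumptions for illustration) that absorbs the parsing complexity in library code so the caller sees one simple function instead of a format-picker pushed onto the user.

```python
from datetime import date, datetime

# The complexity lives here, once, instead of in every caller or in the UI.
_FORMATS = ("%Y-%m-%d", "%d/%m/%Y", "%B %d, %Y")

def parse_user_date(text: str) -> date:
    """Accept several common human formats; callers make one simple call."""
    for fmt in _FORMATS:
        try:
            return datetime.strptime(text.strip(), fmt).date()
        except ValueError:
            continue
    # The irreducible core leaks out here: an unknown format still
    # forces a decision, no matter how much the interface hides.
    raise ValueError(f"Unrecognized date: {text!r}")
```

The complexity was not removed, only moved: the library grew a format table and a failure path so the user interface could shrink to a single text field.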

Spolsky’s Law of Leaky Abstractions

All non-trivial abstractions, to some degree, are leaky

Abstractions are the bedrock of modern software development. They allow us to work with complex systems—like databases, networks, or operating systems—without needing to understand every detail of their internal implementation. However, this law asserts that no abstraction is perfect. Inevitably, details from the underlying layer “leak” through, forcing the developer to confront the very complexity the abstraction was meant to hide.

Why it happens:

  • Performance is a Stubborn Detail: An abstraction can hide how something is done, but it can’t hide how long it takes. An Object-Relational Mapper (ORM) abstracts away SQL, but when it generates an N+1 query that brings the database to its knees, the underlying reality of database performance has leaked through, demanding knowledge of SQL to fix.
  • Failure Modes are Unavoidable: An abstraction cannot perfectly insulate you from the failure modes of the system it represents. An API client library might provide a simple getData() method, but when the network fails, it leaks TCP/IP errors. The developer must then understand the nature of network failures to handle them robustly.
  • Semantic Mismatches: Sometimes the simplified model offered by the abstraction doesn’t perfectly match the nuanced behavior of the underlying system. A file system abstraction might try to provide a uniform interface, but the subtle differences in file locking between Windows and Linux will eventually leak through and cause bugs.

What to do about it:

  1. Learn One Layer Down: This is the most critical takeaway. Do not be a “framework-only” developer. If your job involves using React, have a functional understanding of the DOM. If you use an ORM, learn to write and analyze SQL. If you use Docker and Kubernetes, understand the basics of networking and Linux processes. You don’t need to be an expert in the lower layer, but you must be competent enough to debug when the inevitable leaks occur.
  2. Choose Your Abstractions Deliberately: Favor abstractions that are transparent about their trade-offs and make it easy to access the underlying layer when needed. An ORM that allows you to drop down to raw SQL is less dangerous than one that completely forbids it. Be skeptical of abstractions that promise too much magic.
  3. Debug with the Leak in Mind: When faced with a bizarre or intractable bug, your first question should be, “Is the abstraction leaking?” Use tools to inspect the output of the abstraction. Look at the generated SQL, the raw network requests, or the actual system calls being made. The root cause of the problem is often found in the leaked details.
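"Look at the generated SQL" can be done with nothing but the standard library. This minimal sketch uses SQLite's trace hook to record every statement the engine actually executes, exposing the per-iteration queries that an ORM's friendly interface would hide; the schema is invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Record every SQL statement the engine actually runs -- the moral
# equivalent of inspecting an ORM's generated SQL.
executed = []
conn.set_trace_callback(executed.append)

conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("Ada",))

# An innocent-looking loop; the trace reveals one query per iteration.
for user_id in (1, 1, 1):
    conn.execute("SELECT name FROM users WHERE id = ?", (user_id,))

select_count = sum("SELECT" in stmt for stmt in executed)
print(select_count)  # the per-iteration queries the abstraction would hide
```

When the abstraction is a real ORM rather than raw sqlite3, the same move applies: turn on statement logging, read what actually hit the database, and debug the leaked detail rather than the tidy interface above it.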