
Software Engineering Laws - Risk & Security

5 min read
Series Software Engineering Laws Part 8 of 11

Murphy’s Law

If there is a wrong way to do something, then someone will do it

In software engineering, this is not a cynical joke; it’s the fundamental principle of defensive design. It dictates that if there is an incorrect way for a user to interact with your UI, an invalid way for a developer to call your API, or a way for bad data to enter your system, it is not a matter of if but when it will happen. A robust system is one that anticipates and gracefully handles this inevitability.

Why it happens:

  • User Error and Exploration: End-users are not programmers. They will enter text into number fields, use the back button at the worst possible moment, and click buttons in an order you never anticipated. This isn’t malice; it’s a natural consequence of human interaction with a complex system.
  • Misunderstood Contracts: The developers consuming your API will misread the documentation, misunderstand a data type, or fail to handle an error case. Their system will send you malformed data or call your endpoints in unexpected sequences.
  • Hostile Intent: A subset of interactions will be deliberately malicious. Attackers will probe every input for security vulnerabilities like SQL injection or cross-site scripting, intentionally trying to trigger failure modes you didn’t foresee.

What to do about it:

  1. Practice Defensive Programming: Trust nothing. Treat every input from outside your immediate scope of control—whether from a user, another service, or a database—as potentially invalid or hostile until proven otherwise. Validate data formats, check for nulls, and sanitize all inputs before processing them.
  2. Design to Fail Safely: When an error inevitably occurs, ensure it does so gracefully. An invalid user action should result in a clear, helpful error message, not a cryptic stack trace or a crashed application. A failing downstream service should trigger a controlled fallback or circuit breaker, not a cascading failure that takes down your entire system.
  3. Make the Right Way the Easy Way (Poka-yoke): The best way to prevent errors is to make them impossible through design. In a UI, disable the “Submit” button until the form is valid. In an API, use strong typing and non-nullable fields to make it impossible to pass invalid data. Guide the user or developer toward the correct path so that they have to go out of their way to do the wrong thing.
  4. Assume Malice by Default: When designing security, assume every input is an attack. This shifts the mindset from “preventing accidental mistakes” to “defending against a determined adversary,” leading to far more resilient and secure systems.
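The first two points above can be sketched in a few lines. This is a minimal Python example, assuming a hypothetical `parse_quantity` helper for an order form; the field name and limits are illustrative, not from the original article. The point is the shape: untrusted input is validated, and failure produces a clear message rather than an exception bubbling up to the user.

```python
def parse_quantity(raw):
    """Defensively parse an untrusted 'quantity' field.

    Returns (value, None) on success or (None, error_message) on failure,
    so the caller can show a helpful message instead of crashing.
    """
    try:
        value = int(raw)
    except (TypeError, ValueError):
        # Covers None, empty strings, "abc", floats-as-text, etc.
        return None, "Quantity must be a whole number."
    if not 1 <= value <= 100:
        # Business-rule bound; an out-of-range value is still invalid input.
        return None, "Quantity must be between 1 and 100."
    return value, None
```

A caller never needs a try/except of its own: it checks the error slot and renders the message, which keeps the "fail safely" behavior in one place instead of scattered across the UI layer.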

Kerckhoffs’ Principle

A system should be secure even if everything about the system, except the key, is public knowledge

This is the foundational principle of modern cryptography and a direct refutation of “security through obscurity.” It asserts that the strength of a secure system must not depend on the secrecy of its design, its source code, or its algorithms. Instead, its security must rest solely on the secrecy of a single, small piece of information: the key. A system that is only secure because no one knows how it works is, by definition, insecure.

Why this principle is critical:

  • Secrets Don’t Stay Secret: System designs are inevitably exposed. Source code can be leaked by insiders, reverse-engineered by competitors, or discovered by attackers. Relying on the secrecy of your method is a fragile and temporary strategy.
  • Keys are Disposable; Designs Are Not: If your security depends on a secret algorithm and that algorithm is compromised, you must perform a costly and complex redesign of your entire system. If your security depends on a key and that key is compromised, you simply revoke the old key and issue a new one. This makes the system far more resilient and manageable.
  • Public Scrutiny Creates Strength: The world’s most secure cryptographic algorithms (like AES and RSA) are public knowledge. They are trusted precisely because they have been subjected to decades of intense scrutiny by thousands of experts trying to break them. A proprietary, secret algorithm has never faced this trial by fire and is likely riddled with flaws.

What to do about it:

  1. Assume the Attacker Has the Source Code: This should be your default mental model when designing security features. If an attacker had your complete codebase, would your system still be secure? If the answer is no, because you’ve hardcoded a secret or are relying on a hidden process, your design is flawed.
  2. Never Roll Your Own Crypto: Do not invent your own encryption or hashing algorithms. Use well-known, industry-standard, and publicly vetted libraries and protocols. The security of these systems has been proven through years of public analysis; the security of your custom solution has not.
  3. Isolate Secrets from Logic: Your application’s logic (the “how”) should be separate from its secrets (the “what”). The logic can be open and reviewed. The secrets—API keys, database passwords, encryption keys—must be rigorously protected.
  4. Focus on Robust Key Management: Since the entire security of the system rests on the keys, their management is your most critical task. Use dedicated secret management tools (like Vault, AWS KMS, or Azure Key Vault), enforce strict and granular access controls, automate key rotation, and maintain detailed audit logs for all access.
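Points 2 and 3 can be illustrated together. The sketch below uses only Python's standard, publicly vetted primitives (`hmac` with SHA-256) and keeps the secret out of the source code; the environment variable name `APP_SIGNING_KEY` is a made-up example, and in production you would fetch the key from a secret manager rather than generate a throwaway fallback. Everything about the code can be public; only the key must stay secret.

```python
import hashlib
import hmac
import os
import secrets

# The key lives outside the codebase (hypothetical env var name).
# The random fallback here is for demo purposes only: a real service
# would load the key from a secret manager and fail fast if it's missing.
KEY = os.environ.get("APP_SIGNING_KEY", "").encode() or secrets.token_bytes(32)

def sign(message: bytes, key: bytes) -> str:
    """Produce an HMAC-SHA256 signature using a standard, vetted primitive."""
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(message: bytes, signature: str, key: bytes) -> bool:
    """Check a signature in constant time to defeat timing attacks."""
    return hmac.compare_digest(sign(message, key), signature)
```

Note that `hmac.compare_digest` is used instead of `==`: even the comparison step leans on a vetted library function, because a naive string comparison leaks timing information about how many leading characters matched.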