Amdahl’s Law
The speedup gained from running a program on a parallel computer is limited by the fraction of the program that cannot be parallelized
This law provides the mathematical justification for why simply throwing more hardware (cores, servers, threads) at a performance problem often yields disappointing results. It states that the maximum performance gain is capped by the sequential part of your process. If 10% of your application’s runtime is a single-threaded, serial task, then even with an infinite number of processors, you can never make your application more than 10x faster. That serial portion becomes the ultimate bottleneck.
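The cap follows from a one-line formula: with serial fraction s and N processors, speedup = 1 / (s + (1 − s)/N), which approaches 1/s as N grows. A minimal sketch in Python, reusing the 10% figure from the example above:

```python
def amdahl_speedup(serial_fraction: float, processors: int) -> float:
    """Maximum speedup predicted by Amdahl's Law."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

# With a 10% serial fraction, the ceiling is 1 / 0.10 = 10x,
# no matter how many processors you add:
print(round(amdahl_speedup(0.10, 16), 2))    # prints 6.4
print(round(amdahl_speedup(0.10, 1024), 2))  # prints 9.91 — already near the 10x ceiling
```

Note how going from 16 to 1024 processors (64x more hardware) buys only about a 1.5x improvement.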
Why it happens:
- Inherent Sequential Logic: Most processes have components that must execute in order. This can be initializing the system, reading a configuration file, acquiring a central lock, or aggregating final results from all parallel workers. These steps cannot be sped up by adding more resources because they can only be done by one worker at a time.
- The Law of Diminishing Returns: The first few processors you add provide a significant boost because they attack the large, parallelizable portion of the work. However, as more processors are added, the parallel portion’s runtime shrinks toward zero, leaving the fixed-duration serial part to dominate the total execution time. The return on each additional processor diminishes rapidly.
- Coordination Overhead: In practice, adding more parallel workers isn’t free. It introduces communication and synchronization overhead. The effort required to split the work, distribute it, and collect the results can itself become a new bottleneck, further limiting the theoretical gains predicted by the law.
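The interaction between diminishing returns and coordination overhead can be made concrete with a toy model. The linear per-worker overhead term below is an illustrative assumption, not a measurement of any real system:

```python
def speedup_with_overhead(serial: float, workers: int, overhead_per_worker: float) -> float:
    """Amdahl's Law extended with a linear coordination cost per
    extra worker (a toy model for illustration only)."""
    parallel_time = (serial
                     + (1.0 - serial) / workers
                     + overhead_per_worker * (workers - 1))
    return 1.0 / parallel_time

# 5% serial work, 0.2% of total runtime in overhead per extra worker:
for n in (1, 2, 4, 8, 16, 32, 64):
    print(f"{n:3d} workers -> {speedup_with_overhead(0.05, n, 0.002):.2f}x")
```

Running this, the speedup climbs, peaks in the 16–32 worker range, and then falls: past the peak, each new worker costs more in coordination than it contributes in parallel work.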
What to do about it:
- Profile Before You Parallelize: Never optimize based on assumptions. Before you invest in more hardware or a complex multi-threaded design, use a profiler to measure your application precisely. You must identify what percentage of the runtime is spent in serial execution versus parallelizable execution.
- Focus Obsessively on the Serial Fraction: The most dramatic performance improvements come not from making the parallel part faster, but from shrinking the serial part. Relentlessly attack the bottlenecks. Can that initial data load be made asynchronous? Can you break a coarse, global lock into multiple fine-grained locks? Can you redesign the algorithm to reduce the final, single-threaded aggregation step?
- Architect for Parallelism from the Start: When designing a new system for scale, consider Amdahl’s Law. Choose algorithms and data structures that are inherently parallelizable. Favor approaches like sharding or partitioning data to avoid centralized chokepoints. A system designed with a small serial fraction from the beginning will scale far more effectively than one that has parallelism bolted on as an afterthought.
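One of the tactics above, breaking a coarse global lock into multiple fine-grained locks, is often implemented as lock striping. A sketch using Python's standard `threading` module; the `StripedCounter` class and its stripe count are hypothetical names for illustration:

```python
import threading

class StripedCounter:
    """A counter sharded across several locks (lock striping) so that
    concurrent writers rarely contend on one global lock. A sketch,
    not a production-ready implementation."""

    def __init__(self, stripes: int = 16):
        self._locks = [threading.Lock() for _ in range(stripes)]
        self._counts = [0] * stripes

    def increment(self, key: str) -> None:
        # Hash the key to pick a stripe; unrelated keys usually land on
        # different locks and can therefore proceed in parallel.
        i = hash(key) % len(self._counts)
        with self._locks[i]:
            self._counts[i] += 1

    def total(self) -> int:
        # The final aggregation is the remaining serial step — keep it cheap.
        return sum(self._counts)

counter = StripedCounter()
workers = [
    threading.Thread(
        target=lambda: [counter.increment(f"user-{n}") for n in range(1000)]
    )
    for _ in range(4)
]
for t in workers:
    t.start()
for t in workers:
    t.join()
print(counter.total())  # prints 4000 — no increments lost despite concurrency
```

The design choice mirrors the advice above: the parallel work (increments) is spread across independent stripes, and only a small, cheap aggregation step remains serial.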
Moore’s Law
The complexity for minimum component costs has increased at a rate of roughly a factor of two per year
Wirth’s Law
Software gets slower faster than hardware gets faster
These two laws exist in a state of permanent tension. Moore's Law observes that the number of transistors on a chip (a proxy for raw hardware capability) doubles on a regular cadence: every year in Moore's original 1965 formulation, revised to roughly every two years in 1975. Wirth's Law counters that software gets slower more rapidly than hardware gets faster. The result is the frustrating reality that despite exponential gains in hardware power, modern applications often feel just as sluggish as their predecessors. Moore's Law provides a "performance dividend," and Wirth's Law explains how we immediately squander it.
Why it happens (Why Wirth’s Law usually wins):
- Layers of Abstraction: Modern software is built on mountains of abstraction—operating systems, virtual machines, containers, frameworks, and countless libraries. Each layer adds safety and boosts developer productivity but imposes a performance tax. Since developer time is typically more expensive than CPU cycles, we consistently make the economic choice to trade machine efficiency for human efficiency.
- Rising User Expectations & Feature Bloat: Software is never “done.” The performance gains from new hardware are immediately consumed by demands for more complex features: higher-resolution graphics, real-time collaboration, more powerful analytics, and richer UIs. The baseline of what is considered an acceptable application is constantly rising, eating every available hardware cycle.
- Developer Complacency: Fast hardware makes developers lazy. When there is no immediate performance pain on a developer’s high-end machine, there is little incentive to write efficient code. Inefficient algorithms and bloated dependencies are accepted because the hardware is powerful enough to brute-force through them, creating a cycle of inefficiency.
What to do about it:
- Stop Expecting a Free Lunch: The era of relying on single-core speed improvements from Moore’s Law is over. Gains now come primarily from multi-core processors, which software cannot automatically leverage without careful, parallel design (see Amdahl’s Law). You must actively manage performance; the hardware will no longer save you by default.
- Establish a Performance Budget: Treat performance as a feature. Define explicit, non-negotiable performance targets for your application (e.g., “API response time must be under 100ms,” or “page load must be under 2 seconds”). Integrate performance testing into your CI/CD pipeline to catch regressions automatically. This makes the cost of bloat visible and forces a deliberate conversation about trade-offs.
- Conduct Dependency Audits: Every new library or framework you add is a performance liability. Before adding a dependency, ask: “Is the productivity gain from this library worth its performance cost?” Regularly audit your project to identify and remove unused or overly costly dependencies.
- Optimize for the User, Not the Developer: While developer productivity is important, it cannot be the only consideration. Remind the team that the user is not running their high-end development machine. Test on lower-specced hardware and slower network connections to build empathy and reveal the true performance cost of your architectural decisions.
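A performance budget like the one described above can be enforced as an ordinary test that runs in CI. A sketch using only the standard library; the 100 ms budget echoes the example target in the text, while the handler and test names are hypothetical stand-ins for a real endpoint:

```python
import time

# Budget taken from the team's stated target ("API response time must
# be under 100ms"); adjust to your own service-level objective.
RESPONSE_BUDGET_SECONDS = 0.100

def handle_request():
    # Hypothetical stand-in for the real request handler under test.
    return {"status": "ok"}

def test_response_within_budget():
    start = time.perf_counter()
    handle_request()
    elapsed = time.perf_counter() - start
    assert elapsed < RESPONSE_BUDGET_SECONDS, (
        f"performance regression: {elapsed * 1000:.1f} ms exceeds the "
        f"{RESPONSE_BUDGET_SECONDS * 1000:.0f} ms budget"
    )

test_response_within_budget()  # in CI this would run under a test runner such as pytest
```

Because the budget is an assertion rather than a dashboard metric, a regression fails the build immediately, which is exactly the "make the cost of bloat visible" dynamic the text calls for.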