This is not about the platform, but the principle. LeetCode is still super useful to practice and deepen your algorithm and data structure knowledge! Okay, let’s talk about the elephant in the room, dressed in a hoodie, frantically whiteboarding a binary tree inversion. LeetCode interviews. I swear, I’ve sat through (and unfortunately, given) more of these than I care to admit. And now, with AI casually acing problems that make seasoned engineers sweat? It just hammers home the point: this whole ritual has become fundamentally absurd.
It’s not just a little broken. It’s leaning-tower-of-Pisa broken, and the ground is getting softer thanks to our new AI overlords-in-training.
The Great LeetCode Hamster Wheel
Remember spending, what, 100, 200 hours grinding LeetCode problems? I sure do. Nights and weekends sacrificed to the gods of Big O notation, praying you’d recognize the pattern for that one obscure dynamic programming problem the interviewer pulls out. You memorize dozens, hundreds of patterns, hoping the two you get asked in a 45-minute pressure cooker match something you crammed.
The argument goes: “Hey, if it potentially doubles your salary, 200 hours is a killer ROI!” And sure, on paper, maybe. But that logic conveniently ignores the fact that acing a contrived puzzle tells you almost nothing about whether someone can actually build software people want to use. Or worse, whether they can keep the ridiculously high-paying job they just landed. I’ve seen brilliant LeetCode ninjas arrive on the job and struggle to understand basic application architecture or collaborate effectively. The ROI calculation suddenly looks a lot less rosy then, doesn’t it?
The LeetCode Arms Race Is Getting Worse
As Chungin Lee demonstrated, LeetCode interviews have become even more pointless in the AI era. These problems are now trivially solvable by AI systems, making them perhaps the least valuable skill to test for in human engineers.
I’ve watched countless colleagues spend 100–200 hours preparing for FAANG interviews. That’s up to five full workweeks grinding through hundreds of algorithmic problems for an interview loop where they might face just one or two of them.
According to recent research, the technical bar for interviews has risen “approximately one standard deviation higher” in 2025, with LeetCode “hard” problems now becoming the norm at places like Google. As one senior engineer put it: “I used to think that LeetCode ‘hard’ problems were never asked at Google. Now they seem to have become the norm.”
You Won’t Use These Skills in Real Life
Unless you work on specialized teams building low-level algorithms, you simply won’t use most of these skills outside of interviews.
In my 15+ years building software products, I’ve needed to implement a binary search from scratch exactly zero times. Why? Because every modern programming language already has battle-tested implementations of these algorithms in their standard libraries.
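The standard-library point is easy to demonstrate. In Python, for instance, a correct binary search is one import away (a minimal sketch; other languages have equivalents, like Java’s `Collections.binarySearch` or C++’s `std::lower_bound`):

```python
from bisect import bisect_left

def index_of(sorted_items, target):
    """Binary search via the standard library instead of a hand-rolled loop."""
    i = bisect_left(sorted_items, target)  # O(log n) insertion point
    if i < len(sorted_items) and sorted_items[i] == target:
        return i
    return -1  # not found

print(index_of([2, 3, 5, 7, 11, 13], 7))  # 3
print(index_of([2, 3, 5, 7, 11, 13], 4))  # -1
```

No off-by-one bugs to hunt down, because someone already hunted them down decades ago.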
What LeetCode interviews actually test is:
- Your willingness to endure pointless hazing rituals
- Your privilege to have free time for studying
- Your pattern recognition skills for a very specific type of problem
- Your capacity to perform under artificial pressure
None of these correlate strongly with on-the-job performance for most software roles. Worse, the focus on algorithm puzzles carries a massive opportunity cost: every minute spent on a contrived puzzle is a minute not spent assessing whether a candidate can read and understand code, communicate trade-offs, or debug realistic issues.
Skills for the Interview, Not the Job
Let’s be brutally honest. Unless you’re one of the lucky few working on hyper-optimized search algorithms or low-level graphics rendering, how often are you really implementing a custom K-Means clustering algorithm or manually balancing a red-black tree in your day job?
In my couple of decades slinging code and leading teams, the answer is basically never. When I need a sophisticated algorithm, I use a battle-tested library written by people far smarter and more specialized than me (or now, maybe an AI suggests one). The only times I’ve really dusted off those deep algorithm skills were:
- For fun, on a rainy Sunday because sometimes puzzles are genuinely engaging without the Sword of Damocles (a job offer) hanging over your head.
- Cramming for the next soul-crushing interview cycle.
It’s a skill set largely divorced from the reality of modern application development. We incentivize learning it purely to jump through an artificial hoop.
And here’s the kicker: your interview time is finite. Precious, even. You’ve got maybe 45-60 minutes per interviewer slot. If you spend 30-40 minutes watching a candidate sweat over “Maximal Square” or “Merge k Sorted Lists,” just think about what you aren’t learning about them.
In that same 30-40 minutes, you could have been assessing:
- How they approach breaking down a small, realistic feature request.
- Their ability to read, understand, and critique a piece of existing (maybe slightly flawed) code.
- How they communicate trade-offs when discussing different technical approaches.
- Their thought process for debugging a non-trivial issue.
- Whether they can write code that isn’t just correct for the puzzle, but also maintainable and understandable by other humans.
- Their grasp of testing principles.
- How they handle ambiguity or ask clarifying questions.
You’re actively choosing not to gather signals on these critical, day-to-day engineering skills. Instead, you’re using that irreplaceable time slot to see if they memorized the trick for finding the longest palindrome. Is that really the best use of everyone’s time? It feels like using a Formula 1 car to deliver a single pizza across the street – impressive in one very specific, irrelevant way, and wildly inefficient for the actual task at hand.
This isn’t just about LeetCode being potentially irrelevant; it’s about the opportunity cost. By focusing on these abstract puzzles, we are actively neglecting to assess the very skills that usually determine success on the job.
So, What Are We Actually Testing?
If LeetCode isn’t testing job-relevant coding skills for most roles, what is it testing?
The charitable interpretation is that it tests raw problem-solving ability and CS fundamentals. Okay, maybe partially. But under extreme time pressure and anxiety? It mostly tests pattern recognition derived from hours of dedicated practice. It signals preparation and, maybe, a certain kind of grit – the willingness to endure something unpleasant and difficult to achieve a goal.
Is persistence valuable? Absolutely. Is this the best, or only, way to gauge it? Hell no. We could probably get a similar signal by asking candidates to assemble complex IKEA furniture blindfolded, but we don’t (though maybe we should?).
The less charitable view? It’s intellectual hazing. A shibboleth. A way for interviewers (often unconsciously) to feel smart by asking questions they themselves might struggle with if they hadn’t just reviewed the solution. It heavily favors people with the free time and resources to grind, potentially excluding brilliant candidates who have other life commitments or simply don’t test well under “gotcha” pressure (hello, neurodivergent colleagues).
Enter the Algorithm-Solving Robots
And now, AI. Models like ChatGPT and dedicated coding assistants can often spit out correct, optimized LeetCode solutions faster than a human can type. If the task is so easily automated, why is human proficiency at it the gold standard for entry? It’s like testing cashiers on their ability to do long division by hand in the age of calculators.
This should be a wake-up call. The least valuable work a human engineer does is the stuff easily replicated by a machine. Our value lies higher up the stack: in understanding context, dealing with ambiguity, designing robust systems, collaborating with squishy humans, making trade-offs, and turning fuzzy business needs into working software. Interviews should focus there.
The Missing Middle: Where Most of Us Actually Live
Software development isn’t a binary choice between writing arcane algorithms and drawing high-level system diagrams on a whiteboard. There’s a vast, crucial middle ground: application development.
This is where most engineers spend their careers. It involves translating requirements, choosing frameworks, writing maintainable code, integrating APIs, setting up CI/CD, debugging tricky concurrency issues, managing state, and a thousand other things that LeetCode barely touches.
Yet, look at a typical FAANG-style interview loop: maybe two algorithm rounds, one system design round. Where’s the part that actually tests if you can, you know, build the application? The assumption seems to be that if you can invert a binary tree and sketch a simplified Twitter architecture, you must be great at everything in between.
That assumption is deeply, demonstrably flawed. It’s magical thinking.
What Algorithm Interviews Actually Signal
Let me be crystal clear: algorithm interviews signal exactly one thing:
You practiced algorithmic challenges enough to recognize their patterns and solve them under pressure.
That’s it. They don’t signal general development skills, problem-solving abilities in ambiguous situations, or your capacity to turn business requirements into working software. They signal pattern recognition and preparation.
In my experience running engineering teams, the strongest correlation with successful hires has nothing to do with their ability to solve algorithm puzzles. It’s their track record of shipping real products, working well with others, and learning continuously.
Okay, But What’s The Alternative?
Look, I don’t have all the answers, and anyone who claims they do is probably selling something. But we can do better than abstract puzzles. The core idea isn’t revolutionary: test skills that are actually relevant to the job.
Instead of LeetCode mediums, consider interview formats that give signals on:
- Requirement to Code: Give a small, realistic business problem. How does the candidate clarify requirements? How do they translate that into a basic code structure?
- Code Comprehension & Debugging: Provide a moderately complex (but not intentionally obfuscated) piece of existing code with a subtle bug. Can they read it, understand it, and track down the issue? Some days, this is 90% of the job.
- API Integration / Usage: Ask them to interact with a simple, fake API. Can they handle requests, responses, and errors gracefully?
- Practical Design Choices: Discuss trade-offs. “We need a background task queue. Should we use Redis, RabbitMQ, or build something simple in-process? Why?” This reveals practical architectural thinking.
- Refactoring: Give them a slightly messy but functional piece of code. Ask them to improve its readability, maintainability, or perhaps performance, explaining their reasoning.
- Collaborative Problem Solving: Pair programming on a small, realistic feature. How do they communicate? How do they handle feedback or disagreements?
These aren’t perfect, and they require more thoughtful preparation from the interviewer. But they give much stronger signals about the skills required for day-to-day software development. Resources like CodingChallenges.fyi offer more realistic problems than typical algo puzzles.
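To make the debugging format concrete, here’s a hypothetical exercise of the kind I mean, built around a genuinely subtle Python bug (the snippet and names are mine, not from any particular interview bank):

```python
# A "find the subtle bug" exercise. The version handed to the candidate
# used a mutable default argument, so state leaked between calls:
#
#     def add_tag(tag, tags=[]):   # BUG: the default list is shared
#         tags.append(tag)
#         return tags
#
# The fix below uses None as a sentinel - which is what you'd hope a
# candidate spots, explains, and corrects:

def add_tag(tag, tags=None):
    """Append a tag to a list, creating a fresh list when none is given."""
    if tags is None:
        tags = []  # a new list per call, not one shared default
    tags.append(tag)
    return tags

first = add_tag("urgent")
second = add_tag("billing")
print(first, second)  # ['urgent'] ['billing'] - no leakage between calls
```

Watching someone reason their way to that bug tells you far more about how they’ll handle your production codebase than any dynamic programming trick.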
So, Should I Just Burn My Algorithms Textbook?
Hold on, not so fast. Understanding Data Structures and Algorithms is important. It forms the foundation. Knowing Big O helps you spot inefficiencies not just in tight loops, but in system interactions too. Understanding tradeoffs between, say, a hash map and a list is crucial for writing performant code. You can’t build solid houses without knowing about foundations, load-bearing walls, and maybe not making the roof out of solid lead.
The problem isn’t learning DSA. The problem is using high-pressure, timed, abstract algorithm puzzles as the primary filter in interviews. But this doesn’t mean we throw the baby out with the bathwater and stop assessing foundational knowledge altogether. We absolutely should check if a candidate understands the tools they’re using.
We can, and should, probe this understanding within the context of more realistic interview exercises. For instance:
- During a code review or refactoring task: “I see you chose a List here to store these items. What are the performance characteristics if this collection grows to millions of entries, and we need to frequently check if an item exists? Would another data structure potentially be more efficient? Why?” This gets at complexity understanding without a contrived puzzle.
- While discussing their proposed solution to a feature: “Okay, you’re proposing this loop nested inside another loop to process the data. Can you talk me through the time complexity of that approach? Is it O(n), O(n^2), something else? How might that scale?” Again, analyzing their relevant code.
- Deeper dives on language specifics: “You’re using a Python dict (or a Java HashMap, etc.) extensively here. Can you briefly explain how that works under the hood? Why is lookup typically so fast? Are there any edge cases where performance might degrade?” This tests deeper knowledge relevant to the tools they’d actually use.
These kinds of questions, woven into a discussion about actual code or system design relevant to the job, give you valuable signals about a candidate’s foundational understanding. It tests if they can apply the concepts, not just regurgitate a memorized solution to “Merge k Sorted Lists” while their palms are sweating.
So, learn the fundamentals, absolutely. Understand complexity. But do it for your own growth, when you have the time and space to actually internalize it, not just cram patterns for an interview. And as interviewers, let’s test that understanding in ways that reflect how it’s actually used on the job.
The Bitter Pill: Sometimes You Still Gotta Dance
Now, the reality check. Am I saying you should just refuse all LeetCode interviews on principle? If you have that luxury, maybe. But for most people seeking jobs at companies that still use this process (and let’s face it, that’s a lot of them), you might just have to play the game.
If the job you want requires passing a LeetCode gauntlet, then you need to prepare for that gauntlet. It stinks, but rent doesn’t pay itself. Do what you need to do; there are plenty of prep resources out there.
The Market Is Bifurcating, But Change Is Coming
The paradox of 2025’s tech market is that we’re witnessing a bifurcation not just in opportunities, but in interview practices as well.
For engineers in AI and ML, the market resembles the 2021 boom – multiple offers, aggressive compensation, and expedited processes. One Bay Area staff engineer specializing in AI infrastructure reportedly received an offer from Meta exceeding $1 million in total compensation. But for engineers in “core domains” like frontend, backend, and mobile, the picture looks drastically different.
Similarly, Big Tech companies are doubling down on traditional interview formats while smaller companies innovate with more practical assessments. As one FAANG head of recruiting admitted: “The inertia of these processes is enormous. These companies have built entire recruiting machines around their current processes, with years of calibration data.”
What we’re witnessing is an inversion of historical patterns: for the past decade, interview practices pioneered by Google trickled down to smaller companies. Now, innovation is bubbling up from more agile organizations, with Big Tech watching from the sidelines.
Let’s Fix This Mess
My goal here isn’t just to rant (okay, maybe a little). It’s to push for change. As leaders, as interviewers, as engineers, we shape the hiring culture. I don’t use LeetCode-style questions in my interviews anymore unless the role specifically demands deep algorithmic expertise (which is rare).
If you’re designing an interview process, think critically about what skills you actually need for the role. Is it puzzle-solving under pressure, or is it building, debugging, and maintaining real-world software? Optimize your interview process to test for that.
Stop cargo-culting hiring practices just because Google or Meta does it. Let’s value the skills that actually matter, not just the ones that are easiest to quantify with a pass/fail on a coding puzzle. Cut it out.