LeetCode-Style Interviews Are Over. It’s Not “Can You Code?” but “How Do You Build?”

The unbundling of the technical interview in the AI era.

The ground is shifting beneath the tech industry’s most dreaded ritual: the technical interview.

For years, the path to a software engineering job was paved with LeetCode problems, an array of abstract puzzles that often bore little resemblance to the actual work.

The rise of AI is forcing long-overdue changes in big tech. Many tech-forward companies and startups have already adopted interview formats that mirror real-world tasks. Now, large companies are being pushed to scrap the old playbook and embrace practical skills over memorisation of algorithms.

That said, engineers and data scientists still need a solid grasp of their discipline’s fundamentals. What’s changing is how that knowledge is assessed. It’s less about recalling syntax and more about knowing when to apply the right algorithm or technique.

A key trait companies now need to evaluate is whether a candidate is a T-shaped engineer or scientist: someone with deep expertise in one area (the vertical bar) and broad, functional knowledge across others (the horizontal bar). This “generalising specialist” can collaborate effectively across their discipline, a vital trait for an AI Engineer.

From Recitation to Collaboration

The biggest shift is already happening in the coding round. The old “no AI tools allowed” rule is fading fast, replaced by an expectation that candidates will leverage AI, just as they would on the job. The focus is not on raw implementation from memory but on your ability to direct, validate, and refine AI-generated code.

Of course, there’s still value in knowing a candidate has strong language fundamentals and a solid grasp of core concepts.

Personally, I prefer a pair programming approach:

  • Provide the candidate with a repo ahead of time, closely related to the job.
  • Give them some time to familiarise themselves, choose their tools, and get it running locally.
  • Then, conduct a pair programming session.

Raw skills can still be validated by splitting the session into two parts: turning off AI assistance for one portion and asking the candidate to write some code manually in a language of their choice. This appears to be how Anthropic still assesses core skills (see the Anthropic Candidate AI Guidance).

Alternatively, and this is my preferred method, leave the AI on for the whole session. It’s not about “cheating”; it’s about demonstrating judgment. Strong candidates guide the model with precise prompts, catch subtle bugs, and articulate tradeoffs. They stay in the driver’s seat.

Shopify’s Head of Engineering captures this new philosophy perfectly. They’ve fully embraced AI in their interview process:

I love it. Because what happens now is that the AI will sometimes generate pure garbage… If they don’t use a copilot, they usually get creamed by someone who uses one… When the candidate does use a copilot, I love seeing the generated code. I ask them: what do you think? Is this good code?… I’ve seen engineers… try to prompt to fix it… I want you to use it, 90–95%. I want you to be able to go in and look at the code and say, oh yeah, there’s a line that’s wrong.

The interview has become a dialogue between the candidate, the interviewer, and the AI. Success is measured by the quality of that dialogue.

The Technical Test Now Mirrors the Job

As AI handles more boilerplate, companies are realising that abstract algorithm challenges are a poor signal. Instead, they’re designing assessments that mirror the actual work their teams do:

  • Debugging is the new Data Structures and Algorithms. Some firms now present candidates with a buggy, AI-generated application and ask them to fix it. This is a far richer test of fundamentals, revealing a candidate’s ability to trace logic, diagnose errors, and apply knowledge in a practical context. Companies can plant incorrect or inefficient data structures and algorithms in the codebase and see whether the candidate notices, improves them, and explains their reasoning (a minimal sketch of one such plant follows this list).
  • System design incorporates AI primitives. The classic system design interview remains, but the building blocks have evolved. Instead of just databases and caching layers, the blueprint now includes LLM calls, vector databases, third-party black-box AI APIs, and components for safely evaluating AI outputs (a sketch of one such component also follows the list). The core skill, reasoning about tradeoffs, scale, and latency, is the same, but the context is grounded in modern product development, often with AI features in the development life cycle.
  • Take-homes are truly open. The pretense of working in a vacuum is gone. Take-home projects now explicitly expect candidates to use AI. GitHub Copilot offers a free personal tier, so even candidates without a paid subscription can use an assistant. The follow-up discussion then shifts from what you built to how you built it, probing your process and your collaboration with tools.
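To make the debugging format concrete, here is a minimal sketch of the kind of planted inefficiency described in the first bullet. Everything here is hypothetical (the function, the data, the scenario); the point is that a candidate should spot the quadratic membership check and explain the fix.

```python
# Hypothetical screening snippet: report user IDs that appear more than once.

def find_duplicates(user_ids: list[int]) -> list[int]:
    seen = []        # planted inefficiency: membership checks on a list are
    duplicates = []  # O(n), making the loop O(n^2) on large inputs
    for uid in user_ids:
        if uid in seen:
            duplicates.append(uid)
        seen.append(uid)
    return duplicates

def find_duplicates_fixed(user_ids: list[int]) -> list[int]:
    # The fix a strong candidate should reach for: a set gives O(1)
    # average-case membership checks, so the scan becomes O(n).
    seen: set[int] = set()
    duplicates = []
    for uid in user_ids:
        if uid in seen:
            duplicates.append(uid)
        seen.add(uid)
    return duplicates
```

The interesting signal isn’t the one-line fix itself, but whether the candidate can articulate why the original version degrades at scale.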
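For the system design bullet, here is a similarly hedged sketch of an “AI primitive” as a building block: the LLM is treated as an unreliable black-box dependency whose output must be validated before it flows downstream. Note that `call_model` is a hypothetical stand-in for whichever client the design assumes, not a real library API.

```python
import json

REQUIRED_KEYS = {"summary", "priority"}

def summarise_ticket(ticket_text: str, call_model) -> dict:
    # Black-box LLM call: call_model is a hypothetical stand-in that takes a
    # prompt string and returns raw model text.
    raw = call_model(
        "Summarise this support ticket as JSON with keys "
        f"'summary' and 'priority':\n{ticket_text}"
    )
    # Validation layer: never trust model output to be well-formed.
    try:
        parsed = json.loads(raw)
        if isinstance(parsed, dict) and REQUIRED_KEYS <= parsed.keys():
            return parsed
    except json.JSONDecodeError:
        pass
    # Fallback path: degrade gracefully rather than pass garbage downstream.
    return {"summary": ticket_text[:200], "priority": "unknown"}
```

In an interview, the discussion would centre on where this validation lives, how failures are monitored, and what the latency and cost tradeoffs are: the same reasoning as a classic design round, just with new components.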

The Most Important Trait: Build

The best way to prepare for modern technical interviews is simple: build things with AI.

  • Learn to write effective prompts.
  • Get comfortable debugging AI-generated code.
  • “Tab, Tab, Tab,” as Cursor puts it, or “prompt, prompt, prompt.”

Most importantly, don’t be afraid to dive into the code to correct AI mistakes, improve its output, or start afresh. Be ready to talk about your process, explain your thinking, understand context management, and demonstrate that you’re not a passive consumer of AI but an active, intelligent collaborator.

The Divide (and the Inevitable Future)

This transition isn’t happening uniformly. A clear split has emerged:

  • Large, established firms are moving cautiously. With their scale and institutional inertia, many will likely keep LeetCode-style screens, delivered through HackerRank-style platforms or assessments, as a filter layer to manage application volume. Honestly, I think we can do better, and even these platforms are moving towards validating real work: HackerRank itself is embracing the change (see HackerRank AI coding assessments).
  • Startups and tech-forward companies are experimenting aggressively. They’re pushing for collaborative coding in tools like Cursor or Claude Code, designing AI-centric system design prompts, and replacing abstract puzzles with practical debugging sessions.

The pressure is mounting to evaluate candidates on skills relevant to a world saturated with AI. The technical interview now asks one fundamental question: not “Can you code?” but “How do you build?”