When AI Interviews Fail

- 26 June 2025 - 10 min read

More than a week ago I went through a technical interview for a “Senior Platform Engineer” role at a major European healthtech company known for its presence in Spain, Poland, and Latin America.

I got rejected today.

And the reasons given were, frankly, absurd.

According to their feedback, I had no experience with Argo Rollouts (a tool I’ve used, and whose canary strategies I literally walked through during the interview), didn’t know that KEDA supports scale-to-zero (which I explained with a real-world production use case), and had only “foundational” knowledge of database partitioning (despite giving concrete examples of horizontal partitioning and discussing vertical vs. horizontal scaling).

Oh, and apparently I hadn’t touched HashiCorp Vault in a while and didn’t show enough detail.

Except I did.

I explained how we pulled Vault secrets into both Terraform and our applications, how I used Vault’s UI, and how I integrated it with federated logins and SSO for added security. What else do you want, a recitation of Vault’s changelog from the last decade?
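
For the record, here’s the shape of the integration I described, as a rough sketch using Vault’s Kubernetes Agent Injector (the service name and secret path are made up for illustration, and this is one common pattern rather than a claim about that exact stack):

    apiVersion: v1
    kind: Pod
    metadata:
      name: payments-api                  # hypothetical service name
      annotations:
        # Ask the Vault Agent Injector to add a sidecar to this pod
        vault.hashicorp.com/agent-inject: "true"
        # Vault role the pod authenticates as (Kubernetes auth method)
        vault.hashicorp.com/role: "payments-api"
        # Render the secret at /vault/secrets/db-creds inside the container
        vault.hashicorp.com/agent-inject-secret-db-creds: "secret/data/payments/db"
    spec:
      containers:
        - name: app
          image: payments-api:1.0.0       # hypothetical image

The Terraform side reads the same paths through the Vault provider, so IaC and workloads share one source of truth.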

At first, I was just annoyed. But then something clicked.

The AI Filter Theory

The interview was done via video call. Everyone had a strong accent (I’m Spanish, lived in Australia for 11 years), and the team was Polish. We were speaking in English, and I noticed the usual friction: people talking over each other, some awkward pauses, unclear phrases.

Then I remembered something: live AI transcription.

It’s common now to transcribe interviews and feed them into LLMs with prompts like:

  • “Evaluate this candidate’s technical strengths and weaknesses”
  • “Summarize knowledge gaps”
  • “Extract topics the candidate didn’t cover”

And if your transcription is already rubbish, your AI summary will be too. Garbage in, garbage out.

This explains a lot. Because the feedback reads exactly like the output of an LLM that parsed a flawed transcript and confidently misunderstood everything.

I Know Scale-to-Zero. I’ve Also Done Scale-to-Infinite.

Let me tell you what scale-to-zero really looks like in the real world.

I built an ML platform for Expedia to power dynamic media processing for the largest destination gallery in the world. Not a large one. The largest.

Right there, leading Expedia Global’s web innovation from our office in Brisbane.

We’re talking petabytes of images and videos that needed to be cleaned, transformed, and ML-tagged and categorized on demand. The system had to process all media Expedia owns again and again—predictably, reliably, and under strict time constraints.

And it had to sit idle (at zero) when not in use. That’s true scale-to-zero. Not theory. Not a marketing checkbox. Real-world, production-grade, fault-tolerant infrastructure.
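
Since KEDA is the specific name in the feedback: scale-to-zero there is literally one field, minReplicaCount: 0, on a ScaledObject. A minimal sketch (the queue and deployment names are illustrative, and I’m not claiming the Expedia platform ran on KEDA; the pattern is what matters):

    apiVersion: keda.sh/v1alpha1
    kind: ScaledObject
    metadata:
      name: media-processor               # illustrative name
    spec:
      scaleTargetRef:
        name: media-processor             # the Deployment to scale
      minReplicaCount: 0                  # sit at zero when there is no work
      maxReplicaCount: 50
      cooldownPeriod: 300                 # seconds of quiet before dropping to zero
      triggers:
        - type: aws-sqs-queue
          metadata:
            queueURL: https://sqs.eu-west-1.amazonaws.com/123456789012/media-jobs
            queueLength: "20"             # target backlog per replica
            awsRegion: eu-west-1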

If autoscaling didn’t work perfectly, you’d get broken pages, incomplete galleries, and a cascade of issues directly affecting revenue and UX across global platforms.

So yes, I know scale-to-zero. I’ve engineered it with millions of dollars riding on it.

Now Let’s Talk Scale-to-Infinite

That same system had to process massive bursts of demand from global traffic without predefined limits. That meant:

  • Dynamic provisioning across clusters
  • Self-healing retries
  • Carefully designed steps to make efficient use of resources
  • Queue-driven workloads that adapted to spikes in near real time (one pattern sketched below)
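
The queue-driven piece deserves a concrete shape. KEDA’s ScaledJob is one way to express it (again an illustrative sketch, not the exact Expedia setup): the queue backlog spawns Jobs, retries are handled at the Job level, and parallelism follows the burst:

    apiVersion: keda.sh/v1alpha1
    kind: ScaledJob
    metadata:
      name: media-reprocess               # illustrative name
    spec:
      jobTargetRef:
        backoffLimit: 4                   # self-healing retries per Job
        template:
          spec:
            restartPolicy: Never
            containers:
              - name: worker
                image: media-worker:2.3   # hypothetical image
      pollingInterval: 15                 # check the queue every 15 seconds
      maxReplicaCount: 500                # effectively "as wide as the burst needs"
      successfulJobsHistoryLimit: 5
      triggers:
        - type: aws-sqs-queue
          metadata:
            queueURL: https://sqs.eu-west-1.amazonaws.com/123456789012/reprocess-jobs
            queueLength: "5"
            awsRegion: eu-west-1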

This is the kind of engineering that you don’t “study”. You build it. You live it. You debug it at 3am when the gallery pipeline eats itself. That’s what senior platform engineers do.

And here’s the thing:

Not long ago, the CTO of Huawei Cloud personally chose me to lead a new business unit focused on helping enterprise clients migrate from on-prem to the cloud. The role? Deciding how to Dockerize, how to scale, and how to re-architect legacy systems for a cloud-native future.

Let me be clear: When Huawei is the one managing a customer’s migration, those customers are not small (and those applications are absolutely not simple either).

So when I get interview feedback like “Candidate didn’t know KEDA supports scale-to-zero”… I honestly have to laugh.

If you’re evaluating senior engineers based on whether they can recall a single bullet point from a specific tool’s docs (rather than whether they’ve built systems that live and die by those patterns) you’ve already lost the plot.

This Is How You Reject Senior Engineers?

Throughout the interview, I provided detailed answers with real examples. I explained trade-offs, deployment patterns, production challenges, security posture, recovery mechanisms, you name it.

And apparently the issue was that I didn’t rattle off a bullet-point list of features from memory fast enough.

Let’s get something straight:

  • I’ve managed K8s clusters with 10k CPUs.
  • I’ve used Argo Rollouts in production and even described canary strategies during the call (a sketch follows this list).
  • I’ve deployed Karpenter and the Cluster Autoscaler on EKS, and explained when to use which based on latency and provisioning needs.
  • I’ve used Vault with SSO and federated auth, and automated secrets distribution across apps and IaC.
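
And since “canary strategies” is apparently where I fell short, here is the shape of what I walked through on the call, as a minimal sketch (names, weights, and timings are illustrative):

    apiVersion: argoproj.io/v1alpha1
    kind: Rollout
    metadata:
      name: api-gateway                   # hypothetical service
    spec:
      replicas: 10
      selector:
        matchLabels:
          app: api-gateway
      template:
        metadata:
          labels:
            app: api-gateway
        spec:
          containers:
            - name: api-gateway
              image: api-gateway:2.0.0    # hypothetical image
      strategy:
        canary:
          steps:
            - setWeight: 10               # shift 10% of traffic to the new version
            - pause: {duration: 5m}       # watch metrics before going further
            - setWeight: 50
            - pause: {}                   # indefinite pause: promote only on explicit approval

Swap the manual pauses for an analysis step backed by metrics and the promote/rollback decision becomes automatic; that manual-versus-automated trade-off is exactly the kind of thing the conversation covered.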

If your interview process misses or discards all that because of how I explained it or because your transcription/AI summarization missed the point, you’re not hiring engineers. You’re screening for actors in a roleplay.

The Human Cost of Lazy AI Usage

This isn’t just a miscommunication. It’s a systemic failure.

AI should assist in the hiring process, not become a proxy for judgment. And if you’re feeding flawed transcripts into LLMs and using the outputs as gospel, you’re not just introducing bias, you’re actively filtering out qualified people who don’t match your interview template perfectly.

It’s especially punishing for:

  • People who answer with context and nuance instead of textbook phrasing
  • Candidates who prioritize real-world experience over regurgitating buzzwords
  • Engineers who think in systems and trade-offs rather than feature checklists

And if that’s how you’re filtering senior engineers today, you’re not selecting for talent but rather for trivia.

