AI Hiring Gone Wrong

- 26 June 2025 - 14 mins read

Last week, I interviewed for a Senior Platform Engineer role at a major European healthtech company with offices in Europe and South America.

Today, I got rejected. The reasons are so absurd, I’m still shaking my head.

Their feedback? Among other things, they claimed I didn’t know KEDA supports scale-to-zero.

Except… I did.

I described a production-grade use case with KEDA and SQS, which they acknowledged as “highly applicable” to scale-to-zero scenarios. Oh, and apparently I failed to “articulate core technical concepts in a focused and structured way”, despite detailing real-world systems I’ve built for Expedia and other world-class companies.

Let’s unpack this nonsense…

The Feedback Fiasco

Here’s the gem from their rejection email: “We acknowledge that you described a use case involving SQS, which indeed aligns well with scenarios where scale-to-zero is highly applicable. Your experience managing such patterns across environments is clear and was noted by us”. Sounds good, right? Now here’s the kicker: “You didn’t know that in KEDA we can scale to zero.”

Read that again. They praised my SQS use case as a perfect example of scale-to-zero in KEDA, then claimed I didn’t know KEDA supports it.

That’s not feedback; that’s a Schrödinger’s paradox.

And to top it off, they added: “While we value practical experience, it’s equally important for us to assess a candidate’s ability to articulate core technical concepts in a focused and structured way. Taking all of this into account… we will not be moving forward.”

So, I demonstrated scale-to-zero in a real-world system, they agreed it was spot-on, but I got dinged for not saying the magic words “Yes, KEDA scales to zero” like a trained parrot.

If this isn’t absurd, I don’t know what is.

Scale-to-Zero? What About Scale-to-Infinite?

Let’s talk about what scale-to-zero actually looks like.

Spoiler alert: scaling to zero is the easy part. At Expedia, I led an ML platform for the world’s largest destination gallery, processing petabytes of images and videos. Not a large gallery. The largest.

This wasn’t some toy project. This platform powered dynamic media cleaning, transformation, and ML-tagging for a global audience, under strict time constraints.

The system had to scale to zero when idle, saving costs while sitting dormant yet ready to spin up instantly for the next burst. Scale-to-Infinite was the hard part: absorbing those bursts while guaranteeing uptime, with no broken pages or incomplete galleries. A single hiccup could’ve tanked UX and revenue across Expedia’s global platform. Millions were on the line, and I delivered.

That’s scale-to-zero. Not a buzzword I memorized from KEDA’s docs, but a real, production-grade, fault-tolerant system I engineered.
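
For anyone keeping score, the pattern I walked them through boils down to a few lines of YAML. Here’s a minimal sketch of a KEDA ScaledObject driven by SQS queue depth; every name and the queue URL are hypothetical, and the trigger authentication is omitted:

```yaml
# Hypothetical KEDA ScaledObject: scales an SQS-driven worker
# Deployment down to zero replicas when the queue is empty.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: media-worker-scaler        # hypothetical name
spec:
  scaleTargetRef:
    name: media-worker             # hypothetical worker Deployment
  minReplicaCount: 0               # scale-to-zero when idle
  maxReplicaCount: 100             # cap for traffic bursts
  cooldownPeriod: 300              # seconds of quiet before dropping to zero
  triggers:
    - type: aws-sqs-queue
      metadata:
        queueURL: https://sqs.eu-west-1.amazonaws.com/123456789012/media-jobs
        queueLength: "5"           # target messages per replica
        awsRegion: eu-west-1
```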

And they acknowledged it was “highly applicable”! Yet, somehow, I “didn’t know” the feature existed.

Make it make sense.

The Karpenter Nonsense

Don’t leave yet; it gets better! They also claimed I “didn’t know exactly how Karpenter works and connects to AWS”.

Except… I did.

I explained how Karpenter provisions EC2 instances directly for Kubernetes pods, bypassing Auto Scaling Groups entirely (unlike the Cluster Autoscaler typically used on EKS, which scales by resizing ASGs), assigning instance types dynamically, and scaling faster for pod-driven workloads. I even broke down when to use Karpenter vs. the Cluster Autoscaler based on latency and provisioning needs.
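
To spell it out: Karpenter’s entire value proposition is that there’s no ASG in the loop. You declare constraints, and Karpenter calls EC2 directly to launch right-sized nodes for pending pods. A rough sketch under the current NodePool API (names are hypothetical, and the referenced EC2NodeClass would hold the AMI, subnet, and IAM details):

```yaml
# Hypothetical Karpenter NodePool: Karpenter picks instance types
# dynamically and launches EC2 capacity directly, no ASG involved.
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: general-purpose
spec:
  template:
    spec:
      requirements:
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["spot", "on-demand"]  # allow both capacity types
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64"]
      nodeClassRef:                      # AWS-specific launch config lives here
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default
  limits:
    cpu: "10000"                         # hard cap on total provisioned CPU
```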

Oh, and this wasn’t just theoretical. I’ve managed Kubernetes clusters with 10,000 CPUs, deployed Karpenter in production, and optimized it for cost and performance. Yet, their feedback suggests I blanked on the basics. Did they want me to recite Karpenter’s GitHub README? Or maybe they just didn’t hear me through their AI filter.

More Absurdity Beyond KEDA and Karpenter

Keep your seatbelt fastened; the feedback didn’t stop at KEDA! They also claimed I had no experience with Argo Rollouts, even though I described canary deployment strategies in production during the interview.
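
For context, the canary strategy I described maps directly onto an Argo Rollouts manifest. A minimal sketch, with a hypothetical service name, image, and step timings:

```yaml
# Hypothetical Argo Rollouts canary: shift traffic in stages,
# pausing between steps so metrics can be checked before promotion.
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: media-api                  # hypothetical service
spec:
  replicas: 10
  selector:
    matchLabels:
      app: media-api
  strategy:
    canary:
      steps:
        - setWeight: 10            # 10% of traffic to the new version
        - pause: {duration: 5m}    # watch error rates and latency
        - setWeight: 50
        - pause: {duration: 10m}   # full promotion follows the last step
  template:
    metadata:
      labels:
        app: media-api
    spec:
      containers:
        - name: media-api
          image: registry.example.com/media-api:2.0.0  # hypothetical image
```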

And there’s more! The feedback takes another absurd turn with database knowledge. They called my knowledge “foundational”, despite my explaining horizontal partitioning, vertical vs. horizontal scaling trade-offs, indexing, and materialized views. I even went further, discussing when a database alone isn’t enough and it makes sense to bring in a tool like Elasticsearch, whose sharding handles scale, and how to balance the two systems for optimal performance. That’s not “foundational”: that’s advanced system design.

I’ve optimized queries to eliminate full table scans, used partitioning to scale data-intensive systems, and integrated Elasticsearch for search-heavy workloads in production. At Expedia, for example, I worked on systems handling petabytes of data, where these techniques were critical to performance and cost. But apparently, my knowledge is merely “foundational” because I didn’t recite a textbook definition fast enough.

Come on.

And HashiCorp Vault? Apparently, I hadn’t touched it recently, even though I detailed its UI, my experience rolling out SSO with federated logins for large teams, the Terraform provider, and its Kubernetes operator integration for native secrets. What did they want, a live recital of Vault’s changelog from the last decade?
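
And since they apparently wanted specifics, the “Kubernetes operator for native secrets” part looks roughly like this with the Vault Secrets Operator; the mount, path, and names here are hypothetical:

```yaml
# Hypothetical Vault Secrets Operator resource: syncs a KV-v2
# secret out of Vault into a native Kubernetes Secret.
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultStaticSecret
metadata:
  name: app-db-creds
spec:
  vaultAuthRef: vault-auth       # hypothetical VaultAuth resource
  mount: kv                      # KV-v2 secrets engine mount
  type: kv-v2
  path: platform/app/db          # hypothetical secret path
  refreshAfter: 60s              # re-sync interval
  destination:
    name: app-db-creds           # Kubernetes Secret created by the operator
    create: true
```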

This isn’t about my skills; it’s about a process that values buzzword bingo over real engineering. I’ve managed huge Kubernetes clusters, deployed both Karpenter and the Cluster Autoscaler with clear latency trade-offs, and I was even chosen to lead Huawei’s cloud enterprise migrations for complex, mission-critical systems. The CTO of Huawei Cloud picked me to architect cloud-native solutions for massive clients. But sure, tell me I don’t know scale-to-zero because I didn’t recite KEDA’s docs verbatim.

The AI Filter Clown Show

Here’s my theory: AI screwed me over. The interview was a video call with a Polish team, and I’m a Spanish engineer who’s lived in Australia for 11 years. English was our common language, but accents and occasional crosstalk made it messy. As I found out later, they used live AI transcription, then fed it into an LLM with prompts like “Check if the candidate knows KEDA’s features”, and got garbage output from an already-garbled transcript.

Why? Because the feedback reads just like an LLM hallucinating. It praises my SQS use case as a perfect scale-to-zero example, then claims I didn’t know the feature, confidently contradicting itself in the same breath. An AI fixated on keywords like “scale-to-zero” might have missed the nuance of my example, especially if the transcription botched my words. And the vague “not focused and structured” critique? Classic LLM-speak when it can’t parse context.

The Real Cost of AI Hiring

This rejection is a systemic failure. If you’re using AI to filter senior engineers, you’re not hiring talent; you’re casting for a trivia show. AI can’t grasp the nuance of debugging a pipeline at 3am, architecting systems with millions in revenue on the line, or making trade-offs that balance cost, scale, and reliability.

These types of processes punish engineers who:

  • Prioritize real-world impact over textbook answers.
  • Think in systems and trade-offs, not product names.
  • Bring battle-tested experience that doesn’t fit an AI’s rigid template.

If your LLM misreads a transcript or dings me for not saying “scale-to-zero” enough, you’re not just rejecting me; you’re rejecting the engineers who build the systems you claim to need.

Wake Up, Hiring Teams

To companies leaning on AI like this: You’re blowing it.

Senior engineers don’t win by memorizing tool docs or product names. They win by solving hard problems and delivering results. I built scale-to-zero for Expedia’s revenue-critical platform, optimized Karpenter for huge clusters, and designed data systems with partitioning and Elasticsearch for scale. If you’re looking for seniors who parrot tool features for the sake of it, you’re not looking for seniors, and you’re not making your business more efficient.

Imagine rejecting Linus Torvalds himself because he once said “Debian is too difficult to install”, claiming he’s “unfamiliar with Linux”. That’s almost the level of absurdity here.

If your AI filter misses my expertise because I didn’t say the right incantation (or worse, if your inexperienced interviewers can’t spot real talent without external tools), you’re not building a team; you’re building a mess.

