How To Spot A Fake Candidate Before You Hire One
Ivan appeared well-qualified on paper. The role, a senior engineering position at the voice authentication platform Pindrop Security, seemed like a perfect match for the Russian coder’s skillset.
But when Pindrop recruiters called Ivan for a video interview, it immediately became clear something was amiss. His facial expressions lagged behind his words, the way a poorly dubbed film falls out of sync. When asked an unexpected technical question, Ivan paused for an unnaturally long time—or, put another way, the exact right amount of time to process a response before playing it back.
Fortunately for Pindrop, fraud detection is their core business. Ivan was quickly identified for what he was—a scammer using deepfake software to get hired.
“With over 25 years of experience in Human Resources, I’ve encountered countless challenges in talent acquisition—but nothing quite like the emerging threat we face today,” Pindrop’s Chief People Officer Christine Aldrich wrote in a blog post detailing the incident. “The rise of fraudulent profiles and deepfake candidates is reshaping the hiring landscape in ways many never anticipated.”
AI has changed hiring in many useful ways, both for employers and job candidates. But deepfake technology has also given rise to a concerning new level of fraud—and it’s moving faster than most companies are prepared to handle. Here’s my advice for how leaders can get ahead of these unnerving levels of fakery and hire with confidence.
Unfortunately, Pindrop’s experience wasn’t unique. According to a 2025 report by GetReal Security, 41 percent of IT and security leaders say their company has hired and onboarded a fraudulent candidate. Sixty-two percent of hiring professionals believe job seekers are now better at faking identities with AI than recruiters are at detecting them. Gartner projects that by 2028, one in four candidate profiles worldwide will be fake.
You may think that a tech company would be harder to fool, but often, they’re the ones being targeted. KnowBe4—a cybersecurity firm whose entire business is teaching organizations to recognize deception—conducted four video interviews, ran background checks and verified references before hiring a North Korean operative as a principal software engineer. The moment their workstation arrived, they began loading it with malware. KnowBe4’s security tools caught it within 25 minutes, but unfortunately, most companies don’t have those tools.
The Department of Justice has taken action against IT fraud perpetrated by North Korea, announcing in June a sweeping operation that took down “laptop farms” in 16 states. Two Americans allegedly involved were also indicted, one of whom was arrested by the FBI.
But the issue runs deeper than just one or two operatives—it’s become an entire ecosystem of fraud. And as Pindrop, KnowBe4 and plenty of others have learned, it can be incredibly difficult to identify.
When Hiring Becomes A Vulnerability
When remote work became the norm during the pandemic in 2020, it looked like a boon for organizations: lower overhead, access to global talent and faster hiring processes. Before the pandemic, only around four percent of U.S. jobs were remote. Today, remote and hybrid roles account for more than a third of all new job postings, according to Robert Half’s analysis of over two million U.S. positions.
Plenty of organizations continue to embrace remote work, but the shift has created an opening. Workers can be recruited, hired and onboarded without so much as an in-person handshake. At the same time, deepfakes are becoming more convincing than ever. Resumes, references and identities can all be falsified with relative ease; even real-time interviews are convincingly fakeable.
“Gen AI has blurred the line between what it is to be human and what it means to be machine,” Pindrop’s CEO and co-founder Vijay Balasubramaniyan told CNBC. The problem is that the traditional hiring process was built for a world where faking an identity was difficult and expensive. That’s no longer the case—and the companies that haven’t rethought their processes are the most exposed.
How Leaders Can Weed Out Bad Actors
The good news is that it is possible to reduce exposure to these sorts of scams.
Ironically, one of the best protective measures you can take is one that AI actually enables: spending more time interviewing candidates in person. A great thing about today’s AI tools is that they’re able to take busywork off the plates of overwhelmed hiring managers, leaving them more time to carefully consider prospective candidates.
When in-person interviews aren’t practical, there are ways of filtering fake candidates online, too. During video interviews, require cameras to be on with no background filters or blurring. Ask unscripted, casual questions that deviate from the resume. Deepfake candidates and coached proxies tend to handle prepared questions smoothly but falter when the conversation goes somewhere unexpected.
Finally, restrict new hires’ system access until their identity is fully confirmed. At KnowBe4, limited onboarding permissions were ultimately what contained the damage. The operative had the job—and the laptop—but was unable to do truly meaningful harm because the access wasn’t there yet.
Remote hiring isn’t going away, nor should it. But the implicit trust on which it was built has to be replaced with something more deliberate. Businesses need to be aware of these new threats to their security and respond proactively. You don’t want to be the next company to issue a cautionary blog post.