Hey HN!
We are a team of red-teamers who have been hacking into ML models for almost a decade. I say _almost_ because my wife says 8 years is not a decade.
Recently, we turned our attention to stopping AI cheating during interviews: https://blind-spots.ai/
Here’s how we did it:
While interviewing candidates for summer internships, I had a weird feeling that some of them were cheating. One candidate in particular would glance at the corner of his screen every time I asked a question. Maybe it was paranoia (fueled by all the interview-cheating posts I was seeing on social media), but I had a strong feeling the person was cheating.
We looked at the cheating prevention/detection solutions on the market. Most of them rely on behavioral heuristics (eye tracking, measuring speech inflections) or spyware (keystroke loggers). These approaches are super intrusive, not to mention incredibly fragile, and the chance of false positives is non-trivial. God forbid I get nervous during my interview and have to look around.
We wanted to take a different approach, building on our experience hacking into ML models via adversarial examples. We make special “invisible” pixel changes so that when an AI cheating tool screenshots the interview question, those pixels force the underlying model to refuse to answer, or even to output an incorrect solution. For audio-based cheating, we make small, targeted perturbations in the spectral domain that cause the AI assistant to mistranscribe the question entirely.
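For the screenshot path, the core idea is a standard gradient-based adversarial-example attack. Here is a minimal, illustrative PGD-style sketch in PyTorch against a hypothetical surrogate model; `surrogate` and `refusal_loss` are placeholders for illustration, not our production code:

    import torch

    def perturb_screenshot(surrogate, refusal_loss, image, eps=4/255, alpha=1/255, steps=40):
        # Optimize a small, bounded perturbation that pushes the surrogate toward refusing.
        delta = torch.zeros_like(image, requires_grad=True)
        for _ in range(steps):
            loss = refusal_loss(surrogate(image + delta))  # higher = more likely to refuse
            loss.backward()
            with torch.no_grad():
                delta += alpha * delta.grad.sign()   # signed gradient-ascent step
                delta.clamp_(-eps, eps)              # keep the change imperceptible
                delta.grad.zero_()
        return (image + delta).clamp(0, 1).detach()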
It took us a few weeks to implement the first prototype, and that's when we hit our first major hurdle: pixels that could break one cheating tool would not work against others. We couldn't figure out why, and we almost called it quits. After a few more weeks of experiments, we found two culprits. (1) Different underlying LLMs: Cluely likely uses Claude, while InterviewCoder uses some variant of the GPT family, and each model requires a different pixel-change strategy. (2) System prompts: the pixel changes are also affected by the system prompt the cheating tool uses, and since every tool has its own variation, each one needs a different pixel-change method.
Our dream was to build a “one-size-fits-all” attack. It took months of iteration and hundreds of experiments to build something that worked against every cheating tool we tested.
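Conceptually, the fix was to stop optimizing against a single surrogate and instead optimize one perturbation against an ensemble of (model, system prompt) pairs. A hedged sketch of that objective, reusing the placeholder names from the loop above:

    def ensemble_refusal_loss(surrogates, refusal_loss, perturbed_image):
        # `surrogates` is a list of (model, system_prompt) pairs standing in for the
        # different LLMs and prompts used by the various cheating tools. Averaging the
        # objective over all of them forces one set of pixels to work against each.
        return sum(refusal_loss(model(perturbed_image, prompt))
                   for model, prompt in surrogates) / len(surrogates)

Plugging a loss like this into the same projected-gradient loop yields a single perturbation that transfers across tools rather than breaking only the one it was tuned on.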
Along the way, we extended our method to defeat audio cheating, where an AI assistant listens to the interviewer and writes answers back to a hidden screen. Making those spectral changes in real time (milliseconds, not hours) was a technical nightmare, but we got there.
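To illustrate what “spectral changes” means here: work on short audio frames, move to the STFT domain, add a small precomputed perturbation, and resynthesize, all within the latency budget of a live call. A rough PyTorch sketch (the perturbation itself would come from an optimization like the ones above; the frame size and hop length are illustrative assumptions):

    import torch

    def perturb_audio_frame(frame, spectral_delta, n_fft=512, hop=128):
        # Apply a precomputed complex-valued STFT-domain perturbation to a 1-D audio frame.
        window = torch.hann_window(n_fft)
        spec = torch.stft(frame, n_fft=n_fft, hop_length=hop,
                          window=window, return_complex=True)
        spec = spec + spectral_delta                  # small, targeted spectral nudge
        return torch.istft(spec, n_fft=n_fft, hop_length=hop,
                           window=window, length=frame.shape[-1])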
In short, after hundreds of experiments and a few months of stubborn engineering, we built a low-friction layer that breaks the “screenshot-and-ask” and audio-proxy workflows cheating tools rely on, without invading candidate privacy or depending on brittle behavioral heuristics. We productized those defenses as BlindSpꙨts (https://blind-spots.ai) so teams can protect live interviews in real time, with minimal false positives and no keystroke or camera spyware. If you run technical interviews, ask us for a demo!
Check out the video here: https://www.youtube.com/watch?v=wJPfr5hIl10