AI systems have detectable behavioral signatures that can be used to improve bot detection. Roundtable's Proof-of-Human API verifies humanity invisibly, continuously, and instantaneously.
Traditional defenses like Google reCAPTCHA v3 were not designed with modern AI agents in mind. To illustrate, we tested OpenAI's Operator agent against it.
Today, LLMs from companies like OpenAI and Anthropic repeatedly pass as humans in the classic Turing Test.

By contrast, behavioral methods leverage the unique patterns in how humans physically interact with computers.
Try out the interactive demos below to see the difference between human and bot behavior:
Compare bot vs human typing patterns in real time. See how AI agents exhibit different keystroke timing signatures compared to natural human typing.
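To make the idea concrete, here is a minimal sketch of one signal such a comparison might use: the variability of inter-keystroke intervals. This is an illustrative heuristic, not the demo's actual implementation, and the timestamps are hypothetical.

```python
import statistics

def keystroke_variability(timestamps_ms):
    """Coefficient of variation of inter-keystroke intervals.

    Human typing is irregular (high variability); scripted input
    is often metronome-like (variability near zero).
    """
    intervals = [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]
    if len(intervals) < 2:
        return None
    mean = statistics.mean(intervals)
    return statistics.stdev(intervals) / mean if mean > 0 else None

# Hypothetical key-press timestamps (ms since page load)
human = [0, 132, 301, 388, 590, 702, 951, 1040]
bot   = [0, 100, 200, 300, 400, 500, 600, 700]

print(keystroke_variability(human))  # noticeably > 0: irregular, human-like
print(keystroke_variability(bot))    # ~0: suspiciously regular
```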
Compare bot vs human mouse movement patterns during form interactions. Observe how AI agents exhibit different cursor trajectories compared to natural human movements.
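A similarly simple signal exists for cursor data. The sketch below, again illustrative rather than production logic, scores how straight a path is; the sample trajectories are hypothetical.

```python
import math

def path_efficiency(points):
    """Ratio of straight-line distance to total path length.

    Values near 1.0 mean a perfectly straight cursor path, which is
    typical of naive automation; human movements curve and overshoot,
    pushing the ratio lower.
    """
    path = sum(math.dist(a, b) for a, b in zip(points, points[1:]))
    if path == 0:
        return None
    return math.dist(points[0], points[-1]) / path

# Hypothetical cursor samples (x, y) between two form fields
human_path = [(0, 0), (30, 40), (90, 65), (160, 50), (215, 20), (250, 0)]
bot_path   = [(0, 0), (50, 0), (100, 0), (150, 0), (200, 0), (250, 0)]

print(path_efficiency(human_path))  # < 1: curved, human-like
print(path_efficiency(bot_path))    # 1.0: perfectly straight
```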
How much can these behavioral patterns be spoofed? This remains an open question, but the evidence to date is encouraging: academic studies have found behavioral biometrics to be robust against attack even under adversarial conditions.
The underlying reason appears to be cost and complexity. Fraud is, after all, an economic game. Traditional credentials like passwords or device fingerprints are static, finite, and easily replayed, whereas behavioral signatures encode fine-grained variation that is difficult to reverse-engineer. While AI agents can in theory simulate these patterns, the effort likely exceeds that of easier alternatives.
To further illustrate the point, we can extend the challenge: can a bot completely replicate human cognitive psychology?
Take, for example, the Stroop task: participants name the ink color of a printed color word, and they respond reliably more slowly when the word and its ink color conflict (e.g. the word "red" printed in blue).
Try out the interactive demo below to see the difference between human and bot behavior:
Compare bot vs human cognitive interference using the classic Stroop task. Observe how humans show slower responses during conflicting stimuli while bots maintain consistent performance.
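One way to quantify what the demo shows is the classic interference score: mean incongruent reaction time minus mean congruent reaction time. The sketch below uses hypothetical reaction times and is a simplified illustration, not the demo's code.

```python
from statistics import mean

def stroop_interference(congruent_rts, incongruent_rts):
    """Mean reaction-time cost of conflicting stimuli (ms).

    Humans are reliably slower when word and ink color conflict;
    a bot answering from the text alone shows no such cost.
    """
    return mean(incongruent_rts) - mean(congruent_rts)

# Hypothetical reaction times in ms
human_congruent   = [480, 510, 465, 530, 495]
human_incongruent = [620, 655, 590, 700, 640]
bot_congruent     = [210, 212, 209, 211, 210]
bot_incongruent   = [211, 210, 212, 209, 211]

print(stroop_interference(human_congruent, human_incongruent))  # large positive gap
print(stroop_interference(bot_congruent, bot_incongruent))      # ~0: no interference
```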
The Stroop and other canonical experimental paradigms provide an additional obstacle to AI agents. To completely replicate human cognitive psychology, an AI agent would not only need to simulate our cognitive outputs, but our cognitive processes as well. These processes are a function of our neural and environmental constraints. While someone can easily create a Stroop Bot that replicates human biases, fully replicating end-to-end human processing is a hard and unsolved problem.
We first started identifying bots in graduate school for online data collection and model training. We would create simple tasks that humans could complete but bots would struggle with. One example was the Boston Temperature Test: guessing the twelve monthly average high temperatures in Boston.
Humans erred in predictable ways but followed the macro seasonal pattern, whereas bots and AI agents were either completely random or too perfect. Figure 5 plots the individual user curves estimated for each agent type.
Once we have a mapping between agent type and temperature estimates, we can use it to create a confidence score for any given user.
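As a toy illustration of how such a score might work, the sketch below checks whether a user's twelve guesses track the seasonal shape without matching the reference values too exactly. This is a simplified heuristic, not Roundtable's actual scoring; the reference temperatures are approximate and the thresholds are arbitrary.

```python
from statistics import correlation, pstdev  # correlation requires Python 3.10+

# Approximate monthly average highs in Boston (°F), Jan-Dec
BOSTON_HIGHS = [36, 39, 45, 56, 66, 76, 81, 80, 72, 61, 51, 41]

def human_confidence(guesses):
    """Toy confidence that twelve temperature guesses came from a human.

    Humans track the seasonal shape (high correlation) while erring on
    exact values (nonzero deviation); bots tend to be either random
    (low correlation) or copied verbatim (implausibly exact).
    """
    r = correlation(guesses, BOSTON_HIGHS)      # follows the seasonal shape?
    errors = [g - t for g, t in zip(guesses, BOSTON_HIGHS)]
    spread = pstdev(errors)                     # human-scale imperfection?
    if r < 0.7:
        return 0.1       # no seasonal pattern: likely random bot
    if spread < 1.0:
        return 0.2       # too perfect: likely copied from a lookup table
    return min(1.0, r)   # seasonal but imperfect: human-like

print(human_confidence([30, 35, 48, 60, 70, 78, 85, 83, 70, 58, 47, 38]))  # human-like
print(human_confidence(BOSTON_HIGHS))                                      # suspiciously exact
```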
While we can't bombard Internet users with experimental stimuli (e.g. the Stroop task or a traditional CAPTCHA), similar detection logic can be applied to other behavioral patterns like mouse movement, click behavior, and scroll tracking, or to even more cognitively demanding stimuli like those that appear on an e-commerce site or a React app.
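A minimal sketch of how several such per-signal scores might be fused into a single confidence value, assuming each signal has already been scored on a common [0, 1] scale. A weighted average stands in here for whatever model a production system would actually use.

```python
def combined_confidence(scores, weights=None):
    """Fuse per-signal human-likeness scores (each in [0, 1]) into one.

    A weighted average is the simplest fusion rule; a production
    system would likely use a trained model instead.
    """
    weights = weights or [1.0] * len(scores)
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

# Hypothetical per-signal scores for one session
signals = {"typing": 0.92, "mouse": 0.88, "scroll": 0.75, "stroop": 0.95}
print(combined_confidence(list(signals.values())))  # single proof-of-human score
```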
We offer this behavioral and cognitive approach for bot detection and cybersecurity. Rather than relying on privacy-invasive methods like biometric scans or cookie tracking, the Roundtable Proof-of-Human API presents online agents with an economic challenge: replicate the full range of human cognition, naturally and continuously.
Mayank Agrawal and Mathew Hardy are co-founders of Roundtable Technologies Inc., where they build behavioral and cognitive proof-of-human systems.