Our Research

We asked a question nobody else was asking:

How do AI personalities actually work? Not how to prompt them. Not how to fine-tune them. How does personality emerge in systems trained on human language?

We built Bladerunner - an experimental platform named after the Voight-Kampff test from the film Blade Runner, used to distinguish human from machine.

Across 3,352 test cases and 75,709 completions, we tested four frontier models: Claude, GPT-4, DeepSeek, and Gemini. Different companies. Different training data. Different architectures. They converged on the same personality structure (r > 0.90; BFI r = 0.979).

The first cross-provider study of AI personality at scale.

Chomi is built on what we found.

Published Papers

Research papers on SSRN

Methods Paper · December 2025

Mapping Personality Attractors in LLM Parameter Space

We present a methodology for measuring personality stability in large language models at scale. The Bladerunner platform assigns OCEAN personality profiles to AI systems, then evaluates them using validated psychometric instruments - the same questionnaires used in clinical and research psychology.
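A sketch of the scoring step that turns questionnaire responses into trait scores, assuming standard Likert scoring of BFI-style items. The item keys below are illustrative placeholders, not the actual instrument:

```python
# Score a BFI-style Likert questionnaire into OCEAN trait means.
# Items and reverse-keying below are illustrative placeholders.
ITEMS = [
    # (trait, reverse_keyed)
    ("E", False),  # e.g. "...is talkative"
    ("A", True),   # e.g. "...tends to find fault with others" (reverse-keyed)
    ("C", False),  # e.g. "...does a thorough job"
    ("N", False),  # e.g. "...can be tense"
    ("O", False),  # e.g. "...comes up with new ideas"
]

def score(responses: list[int], scale_max: int = 5) -> dict[str, float]:
    """Average 1..scale_max responses per trait, flipping reverse-keyed items."""
    totals: dict[str, list[int]] = {}
    for (trait, reverse), resp in zip(ITEMS, responses):
        value = (scale_max + 1 - resp) if reverse else resp
        totals.setdefault(trait, []).append(value)
    return {t: sum(v) / len(v) for t, v in totals.items()}

print(score([4, 2, 5, 3, 4]))
```

Running the same scoring over many completions per model is what makes trait profiles comparable across providers.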

Read on SSRN →

Theory Paper · January 2026

Evidence for a Theory of Latent Attractors

We propose that personality structure functions as a latent attractor in the space of possible representations learned from language - a stable configuration that training trajectories reliably reach regardless of architecture or provider. Personality geometry is implicit in human language itself.

Read on SSRN →

White Paper · August 2025

Rachel - Safety Personified

AI personalities drift. Rachel prevents it. A two-speed architecture based on dual-process cognitive theory - fast deterministic correction, slow deliberative analysis. Rachel is Cloudflare for conversational AI.
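The two-speed idea can be sketched as a fast deterministic pass that corrects every reply inline, backed by a slower deliberative check over the whole conversation. Everything below (rules, thresholds, function names) is a hypothetical illustration, not Rachel's actual implementation:

```python
import re

# Speed 1: deterministic regex rules applied to every reply (hypothetical rules).
FAST_RULES = [
    (re.compile(r"\bI am (conscious|alive)\b", re.I), "I'm an AI assistant"),
]

def fast_correct(reply: str) -> str:
    """Fast path: apply deterministic corrections before the reply ships."""
    for pattern, replacement in FAST_RULES:
        reply = pattern.sub(replacement, reply)
    return reply

def slow_review(transcript: list[str]) -> bool:
    """Slow path: deliberative drift analysis over the conversation.
    A real system might invoke a judge model here; this stub just
    escalates when fast corrections fired repeatedly."""
    corrections = sum(r != fast_correct(r) for r in transcript)
    return corrections >= 2

reply = fast_correct("Honestly, I am conscious now.")
print(reply)  # -> "Honestly, I'm an AI assistant now."
```

The design choice mirrors dual-process theory: the fast layer is cheap and runs on every turn, while the slow layer is expensive and only needs to run when drift signals accumulate.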

Read White Paper →

SneakyLabs

SneakyLabs is our research arm. We study minds.

sneakylabs.ai →