Rabbit Brain: Attractor Geometry for Neural Representation Learning

NeurReps @ NeurIPS 2025 - San Diego, CA
Author: Jhet Chan
Affiliation: Independent Researcher
Contact: research@jhetchan.com


Rabbit Brain is a recurrent, bounded, dissipative iterative map that replaces architectural depth with temporal refinement. Instead of adding parameters through stacked layers, it uses a fixed-size state vector and a single nonlinear update rule to iteratively sculpt the representation geometry. At NeurReps 2025 we present Rabbit Brain v0.5, a tanh-bounded orthogonal recurrence that produces stable attractor basins and competitive classification accuracy on chaotic dynamical tasks. This page collects the artifact links, poster, and resources for the workshop audience.


Resources

Poster PDF (NeurReps 2025)
Source Code (GitHub)
OpenReview Submission (RB v0.1)
Extended Version (arXiv cs.LG) — Coming Soon





Technical Overview

Rabbit Brain implements the iterative map

$$z_{t+1} = \tanh(W_{\mathrm{rec}}\, z_t + W_{\mathrm{in}}\, e_t + b),$$

a bounded, dissipative system producing emergent attractor geometry. Iteration depth plays the role of representational refinement, enabling a small recurrent core to approximate complex manifolds over time. This mapping shares properties with Hopfield-type energy descent but operates without an explicit energy functional.
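
As a concrete illustration of the update rule, the sketch below iterates a tanh-bounded map with an orthogonal recurrent matrix on a constant input until the state settles. This is a minimal NumPy sketch, not the released v0.5 implementation: the dimensions, the QR-based orthogonal initialization, the input scaling, and the convergence tolerance are all illustrative assumptions.

```python
import numpy as np

def orthogonal(n, rng):
    # Random orthogonal matrix via QR decomposition of a Gaussian matrix
    # (illustrative choice; sign correction makes the draw Haar-uniform).
    q, r = np.linalg.qr(rng.standard_normal((n, n)))
    return q * np.sign(np.diag(r))

def rabbit_step(z, e, W_rec, W_in, b):
    # One application of the bounded, dissipative update:
    #   z_{t+1} = tanh(W_rec z_t + W_in e_t + b)
    return np.tanh(W_rec @ z + W_in @ e + b)

rng = np.random.default_rng(0)
d_state, d_in, T = 64, 8, 50                        # hypothetical sizes and iteration budget

W_rec = orthogonal(d_state, rng)                    # orthogonal recurrent weights
W_in = 0.1 * rng.standard_normal((d_state, d_in))   # input projection (assumed scale)
b = np.zeros(d_state)

z = np.zeros(d_state)                               # fixed-size state vector
e = rng.standard_normal(d_in)                       # a constant input embedding

for t in range(T):                                  # temporal refinement in place of depth
    z_next = rabbit_step(z, e, W_rec, W_in, b)
    if np.linalg.norm(z_next - z) < 1e-6:           # state has settled into an attractor
        break
    z = z_next

print(f"stopped after {t + 1} iterations, |z| = {np.linalg.norm(z):.3f}")
```

Because the recurrent matrix is norm-preserving and tanh is contracting away from the origin, repeated application typically drives the state toward a fixed point; the number of iterations spent reaching it is what plays the role of representational refinement above.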


Position in the Literature

Rabbit Brain sits at the intersection of attractor networks, implicit depth models (DEQ/Neural ODE), and reservoir computing. Our contribution is showing that a simple bounded recurrence, iterated over sufficient computational time, yields competitive representations and rich attractor geometry without explicit depth.


If you are a researcher active in cs.LG or related areas and find this line of work promising, I would welcome feedback or a short comment via email.

