Rabbit Brain: Attractor Geometry for Neural Representation Learning
Rabbit Brain is a recurrent, bounded, dissipative iterative map that replaces architectural depth with temporal refinement. Instead of increasing parameters with depth, Rabbit Brain uses a fixed-size state vector and a single nonlinear update rule, applied repeatedly, to sculpt the representation geometry. At NeurReps 2025, we present Rabbit Brain v0.5, a tanh-bounded orthogonal recurrence that produces stable attractor basins and competitive classification accuracy on tasks derived from chaotic dynamical systems. This page contains the artifact links, poster, and resources for the workshop audience.
Resources
Stay Updated
Enter your email to receive:
- The extended arXiv cs.LG version
- Reproducibility notes
- RB v0.6+ updates
(Only major research updates.)
Emails are stored securely in Cloudflare KV and used only for research updates.
What's New in Rabbit Brain v0.5
- Replaced the fractal/Möbius dynamics with a stable tanh-bounded recurrence
- Avoids the blow-up and NaN cascades seen in Julia-set iteration (contrasted in the sketch after this list)
- Produces clean attractor basins in latent space
- Constant parameter count regardless of iteration depth
- Demonstrated on double-well and chaotic dynamical systems
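To make the boundedness point concrete, here is a minimal NumPy sketch contrasting the tanh-bounded update with an unbounded quadratic (Julia-set-style) iteration. The dimension, weight scales, and iteration count are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 32

# Illustrative weights; the paper's actual initialization is not given here.
W = rng.normal(scale=1.0 / np.sqrt(dim), size=(dim, dim))
b = rng.normal(scale=0.1, size=dim)

z_tanh = rng.normal(size=dim)   # tanh-bounded state
z_quad = z_tanh.copy()          # unbounded quadratic (Julia-set-style) state

with np.errstate(over="ignore", invalid="ignore"):
    for _ in range(200):
        # Bounded update: every coordinate stays inside (-1, 1).
        z_tanh = np.tanh(W @ z_tanh + b)
        # Quadratic update: large magnitudes square each step and overflow.
        z_quad = z_quad**2 + b

print("tanh-bounded  max |z| =", np.max(np.abs(z_tanh)))   # < 1 by construction
print("quadratic     max |z| =", np.max(np.abs(z_quad)))   # typically inf or nan
```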
Technical Overview
Rabbit Brain implements the iterative map
$$z_{t+1} = \tanh(W_{\text{rec}}\, z_t + W_{\text{in}}\, e_t + b),$$
a bounded, dissipative system producing emergent attractor geometry, where z_t is the latent state, e_t is the input embedding at step t, and W_rec, W_in, and b are learned parameters. Iteration depth plays the role of representational refinement, enabling a small recurrent core to approximate complex manifolds over time. This mapping shares properties with Hopfield-type energy descent but operates without an explicit energy functional.
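For concreteness, here is a minimal sketch of this update in NumPy. The dimensions, the orthogonal initialization of W_rec, and the iteration count are illustrative assumptions rather than the configuration used in the paper:

```python
import numpy as np

def rabbit_brain_step(z, e, W_rec, W_in, b):
    """One application of z_{t+1} = tanh(W_rec z_t + W_in e_t + b)."""
    return np.tanh(W_rec @ z + W_in @ e + b)

rng = np.random.default_rng(0)
state_dim, input_dim = 64, 16   # illustrative sizes

# Orthogonal W_rec, matching the "tanh-bounded orthogonal recurrence"
# phrasing above; the exact initialization scheme is an assumption.
W_rec, _ = np.linalg.qr(rng.normal(size=(state_dim, state_dim)))
W_in = rng.normal(scale=0.1, size=(state_dim, input_dim))
b = np.zeros(state_dim)

z = np.zeros(state_dim)
e = rng.normal(size=input_dim)   # a fixed input embedding
for _ in range(50):              # iteration depth acts as refinement
    z = rabbit_brain_step(z, e, W_rec, W_in, b)
# After iteration the state sits inside the tanh-bounded region (-1, 1)^d.
```

Because tanh maps every coordinate into (-1, 1), the state cannot diverge no matter how many iterations are applied, which is what lets depth be traded for time.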
Position in the Literature
Rabbit Brain sits at the intersection of attractor networks, implicit-depth models (DEQ/Neural ODE), and reservoir computing. Our contribution is to show that a simple bounded recurrence, iterated over sufficient computational time, yields competitive representations and rich attractor geometry without explicit depth.
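The reservoir-computing connection suggests a simple way to probe this claim: freeze the recurrent core, iterate each input to a near-converged state, and train only a linear readout. The sketch below does this on a toy XOR-like task; the core, the 0.9 contraction factor, and the task are assumptions for illustration, not the paper's experimental protocol:

```python
import numpy as np

rng = np.random.default_rng(0)
state_dim, input_dim, n = 64, 8, 400

# Hypothetical fixed recurrent core; only the linear readout is trained.
Q, _ = np.linalg.qr(rng.normal(size=(state_dim, state_dim)))
W_rec = 0.9 * Q                       # slight contraction -> convergence
W_in = rng.normal(scale=0.5, size=(state_dim, input_dim))
b = np.zeros(state_dim)

def encode(E, steps=50):
    """Run the bounded map to near-convergence for each input row."""
    Z = np.zeros((E.shape[0], state_dim))
    for _ in range(steps):
        Z = np.tanh(Z @ W_rec.T + E @ W_in.T + b)
    return Z

X = rng.normal(size=(n, input_dim))
y = np.sign(X[:, 0] * X[:, 1])        # toy XOR-like labels in {-1, +1}

Z = encode(X)
A = np.hstack([Z, np.ones((n, 1))])   # iterated states + bias column
w = np.linalg.solve(A[:300].T @ A[:300] + 1e-3 * np.eye(state_dim + 1),
                    A[:300].T @ y[:300])   # ridge-regression readout
acc = np.mean(np.sign(A[300:] @ w) == y[300:])
print(f"held-out accuracy of linear readout: {acc:.2f}")
```

Since only the readout is trained, any accuracy above chance here reflects structure created purely by the iterated bounded map.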
If you are a researcher active in cs.LG or related areas and find this line of work promising, I would welcome feedback or a short comment via email.