Attractor and integrator networks in the brain
Author(s)
Khona, Mikail; Fiete, Ila R
Download: Submitted version (19.16 MB)
Open Access Policy
Creative Commons Attribution-Noncommercial-Share Alike
Terms of use
Abstract
In this Review, we describe the singular success of attractor neural network models in describing how the brain maintains persistent activity states for working memory, corrects errors and integrates noisy cues. We consider the mechanisms by which simple and forgetful units can organize to collectively generate dynamics on the long timescales required for such computations. We discuss the myriad potential uses of attractor dynamics for computation in the brain, and showcase notable examples of brain systems in which inherently low-dimensional continuous-attractor dynamics have been concretely and rigorously identified. Thus, it is now possible to conclusively state that the brain constructs and uses such systems for computation. Finally, we highlight recent theoretical advances in understanding how the fundamental trade-offs between robustness and capacity and between structure and flexibility can be overcome by reusing and recombining the same set of modular attractors for multiple functions, so they together produce representations that are structurally constrained and robust but exhibit high capacity and are flexible.
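The abstract's central idea, that simple and forgetful (leaky) units can collectively hold persistent states and integrate inputs, can be illustrated with a minimal sketch. The example below is not from the Review itself; it is an assumed toy model of a two-unit line attractor in which the recurrent weights are tuned so that one eigenmode has eigenvalue exactly 1, letting the network integrate a transient cue and then maintain the result after the input is removed. All parameter values and names (`tau`, `W`, `simulate`) are illustrative.

```python
import numpy as np

# Illustrative toy model (not from the Review): a two-unit linear network.
# W has eigenvalues 1 (along the (1,1) direction) and 0, so activity along
# the (1,1) mode neither grows nor decays: a one-dimensional line attractor.
dt, tau = 0.01, 1.0
W = np.array([[0.5, 0.5],
              [0.5, 0.5]])

def simulate(inputs, steps):
    """Leaky rate units: tau * dx/dt = -x + W @ x + input(t)."""
    x = np.zeros(2)
    for t in range(steps):
        x += (dt / tau) * (-x + W @ x + inputs(t))
    return x

# A brief input pulse is integrated while present, then held indefinitely
# after it ends -- persistent activity from forgetful units.
pulse = lambda t: np.array([1.0, 1.0]) if t < 100 else np.zeros(2)
x_end = simulate(pulse, 2000)
print(x_end)  # the integrated pulse persists long after the input is gone
```

Along the attractor mode the leak (-x) is exactly cancelled by recurrence (W @ x), so the network integrates its input there; the orthogonal mode decays, which is the sense in which individually leaky units organize into a long-timescale integrator. Real continuous-attractor models (ring, plane, and torus attractors discussed in the Review) apply the same principle in nonlinear, higher-dimensional networks.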
Date issued
2022-12
Department
Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences
Journal
Nature Reviews Neuroscience
Publisher
Springer Science and Business Media LLC
Citation
Khona, Mikail and Fiete, Ila R. 2022. "Attractor and integrator networks in the brain." Nature Reviews Neuroscience, 23 (12).
Version: Original manuscript