Memory capacity of the human brain is 10x more than previously thought

New insights into the size and function of neural connections are drastically revising our understanding of the brain’s memory capacity.


This discovery reshapes our knowledge of how the brain efficiently stores vast amounts of information and offers promising avenues for developing powerful, energy-efficient computers. (CREDIT: CC BY-SA 3.0)

Researchers from the Salk Institute, in collaboration with other scientists, have uncovered groundbreaking insights into the size and function of neural connections, drastically revising our understanding of the brain’s memory capacity. This discovery not only reshapes our knowledge of how the brain efficiently stores vast amounts of information but also offers promising avenues for developing powerful, energy-efficient computers.

“This is a real bombshell in the field of neuroscience,” says Terry Sejnowski, a Salk professor and co-senior author of the study published in eLife.

The team’s findings reveal a key design principle that allows neurons in the hippocampus to operate with low energy while maintaining high computational power. Their new measurements suggest that the brain's memory capacity is at least a petabyte, roughly equivalent to the storage capacity of the World Wide Web.

Memory and thought processes result from intricate patterns of electrical and chemical activity within the brain. These processes primarily occur at synapses, the junctions where neurons communicate. Here, the output of one neuron, carried along its axon, meets an input branch of another neuron, called a dendrite.

In a computational reconstruction of brain tissue from the hippocampus, Salk and UT Austin scientists found the unusual occurrence of two synapses from the axon of one neuron (translucent black strip) forming onto two spines (arrows) on the same dendrite of a second neuron (yellow). The spine head volumes, synaptic contact areas (red), neck diameters (gray), and numbers of presynaptic vesicles (white spheres) of these two synapses are almost identical. (CREDIT: Salk Institute)

Neurotransmitters carry signals across the synapse, influencing whether the receiving neuron will transmit an electrical signal to other neurons. Each neuron is connected to thousands of others through these synapses.

“When we first reconstructed every dendrite, axon, glial process, and synapse from a volume of hippocampus the size of a single red blood cell, we were somewhat bewildered by the complexity and diversity amongst the synapses,” says Kristen Harris, a co-senior author of the study and a professor of neuroscience at The University of Texas at Austin. Harris had hoped to uncover fundamental principles about brain organization, but the precision achieved in this analysis far exceeded her expectations.

Although much is still unknown about synapses, it’s clear that their dysfunction can lead to various neurological disorders. Larger synapses, characterized by more surface area and greater vesicle content, tend to be stronger and more likely to activate surrounding neurons than smaller ones.

During their research, the Salk team built a 3D reconstruction of rat hippocampus tissue, the brain’s memory center, and observed something unusual. In about 10 percent of cases, a single axon from one neuron formed two synapses connecting to a single dendrite of a second neuron. This observation suggested that the first neuron might be sending a duplicate message to the receiving neuron.

Initially, this duplication didn’t seem significant. However, Salk staff scientist Tom Bartol saw an opportunity to investigate synaptic sizes more precisely. By measuring the difference between two nearly identical synapses, the team hoped to gain new insights into synaptic categorization, which had previously been limited to the rough classes of small, medium, and large.

Using advanced microscopy and computational algorithms, the researchers imaged rat brain tissue and reconstructed its connectivity, shapes, volumes, and surface areas down to a nanomolecular level. They expected some variance in synapse sizes but were surprised to find that the size differences were minimal.

“We were amazed to find that the difference in the sizes of the pairs of synapses was very small, on average, only about eight percent different in size. No one thought it would be such a small difference. This was a curveball from nature,” Bartol explains.

The size of synapses is crucial for memory capacity, and this eight percent difference became a key figure in the team’s algorithmic models. Previously, the size range between the smallest and largest synapses was known to be a factor of 60, with most synapses being relatively small.

However, with the new understanding that synapses can vary in size by as little as eight percent, the researchers realized there could be as many as 26 different size categories of synapses, rather than just a few. “Our data suggests there are 10 times more discrete sizes of synapses than previously thought,” says Bartol. In computer terms, 26 sizes of synapses correspond to about 4.7 “bits” of information. Previously, the brain was thought to store just one to two bits of information for short- and long-term memory in the hippocampus.
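As a quick sanity check on that bit figure (my arithmetic, not the paper’s full analysis): if each of the 26 distinguishable sizes is treated as an equally likely symbol, the information per synapse is simply the base-2 logarithm of the number of categories.

```python
import math

# Information per synapse if each of the 26 distinguishable sizes is
# treated as an equally likely symbol (a simplifying assumption; the
# study's signal-detection analysis is more involved).
print(f"{math.log2(26):.2f} bits")  # ~4.70 bits

# For comparison, the older picture of only a few size classes:
print(f"{math.log2(3):.2f} bits")   # ~1.58 bits for small/medium/large
```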

“This is roughly an order of magnitude of precision more than anyone has ever imagined,” adds Sejnowski.
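To connect that per-synapse figure back to the petabyte estimate quoted earlier, here is a rough order-of-magnitude sketch. The synapse count is an assumption on my part (common estimates for the human brain range from roughly 10^14 to 10^15); the article itself does not state one.

```python
# Order-of-magnitude check on the petabyte claim.
# ASSUMPTION: ~1e15 synapses; published estimates commonly span
# ~1e14 to ~1e15, and the article does not give a count.
synapses = 1e15
bits_per_synapse = 4.7              # from the 26 distinguishable sizes
total_bits = synapses * bits_per_synapse
petabytes = total_bits / 8 / 1e15   # 1 PB = 1e15 bytes
print(f"~{petabytes:.1f} PB")       # ~0.6 PB, i.e., petabyte scale
```

With a synapse count at the upper end of published estimates, the arithmetic lands at petabyte scale, consistent with the team’s figure.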

Presynaptic docked vesicle numbers correlate with PSD areas, spine head volumes, and neck diameters, but not with neck lengths. (A) All 31,377 presynaptic vesicles. (B) En face view of the 24 docked vesicles (gray spheres) viewed through an axon (green) onto the PSD (red) of an example spine (yellow). (CREDIT: eLife)

This level of precision is particularly surprising given that hippocampal synapses are known for their unreliability. When a signal travels from one neuron to another, it only activates the second neuron 10 to 20 percent of the time.

“We had often wondered how the remarkable precision of the brain can come out of such unreliable synapses,” Bartol notes. The answer seems to lie in the continuous adjustment of synapses, which balance their success and failure rates over time. Using their new data and a statistical model, the team calculated how many signals it would take for a pair of synapses to achieve that eight percent size difference.

They found that for the smallest synapses, about 1,500 signaling events (taking roughly 20 minutes) are needed to trigger a change in size, while the largest synapses require only a couple of hundred events (one to two minutes) to change.
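Those event counts are consistent with a simple signal-averaging argument: estimating a low release probability to within about eight percent relative precision takes on the order of a thousand trials. The sketch below is my illustration of that statistical logic, not the team’s actual model, and the release probabilities are assumptions chosen to match the 10 to 20 percent reliability figure above.

```python
# Back-of-envelope: how many transmission events does it take to pin
# down a synapse's release probability to ~8% relative precision by
# simple averaging? Illustrative only; not the study's model.

def events_needed(p_release: float, rel_precision: float = 0.08) -> float:
    """Number of trials N such that the standard error of the estimated
    release probability, sqrt(p(1-p)/N), equals rel_precision * p."""
    return (1 - p_release) / (p_release * rel_precision**2)

# ASSUMED release probabilities: ~10% for a small, unreliable synapse
# (matching the 10-20% figure above) and ~50% for a large, reliable one.
print(f"small synapse: ~{events_needed(0.10):.0f} events")  # ~1,400
print(f"large synapse: ~{events_needed(0.50):.0f} events")  # ~160
```

Those rough numbers land close to the 1,500 and couple-of-hundred event counts the team reported.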

“This means that every 2 or 20 minutes, your synapses are going up or down to the next size. The synapses are adjusting themselves according to the signals they receive,” Bartol explains.

From left: Terry Sejnowski, Cailey Bromer and Tom Bartol. (CREDIT: Salk Institute)

Harris emphasizes the broader impact of these findings: “Our prior work had hinted at the possibility that spines and axons that synapse together would be similar in size, but the reality of the precision is truly remarkable and lays the foundation for whole new ways to think about brains and computers.”

The findings not only open new research avenues into learning and memory mechanisms but also prompt questions about whether similar rules apply to synapses in other brain regions and how these rules evolve during learning.

“The implications of what we found are far-reaching,” adds Sejnowski. “Hidden under the apparent chaos and messiness of the brain is an underlying precision to the size and shapes of synapses that was hidden from us.”

This newfound precision also offers a compelling explanation for the brain’s remarkable energy efficiency. The human brain runs on only about 20 watts of continuous power, roughly the same as a dim light bulb. These discoveries could help computer scientists design ultra-precise yet energy-efficient computers, particularly those using deep learning and artificial neural networks for complex tasks like speech recognition, object identification, and translation.

“This trick of the brain absolutely points to a way to design better computers,” Sejnowski concludes. “Using probabilistic transmission turns out to be as accurate and require much less energy for both computers and brains.”
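As a toy illustration of what probabilistic transmission means in computing terms (my sketch, not the study’s method): if each connection fires only with some small probability and the result is averaged over repeated presentations, the network recovers the same answer as deterministic weights, while most individual transmissions, and their energy cost, are simply skipped.

```python
import random

# Toy demo of probabilistic transmission: each synapse transmits with
# probability p; scaling by 1/p makes the average an unbiased estimate
# of the deterministic weighted sum. Illustrative sketch only.
random.seed(0)
inputs  = [1.0, 0.5, -0.3, 0.8]   # hypothetical activations
weights = [0.2, -0.4, 0.7, 0.1]   # hypothetical synaptic weights
p = 0.15                           # assumed release probability (10-20%)

deterministic = sum(x * w for x, w in zip(inputs, weights))

trials, total = 5000, 0.0
for _ in range(trials):
    total += sum(x * w / p for x, w in zip(inputs, weights)
                 if random.random() < p)

print(f"deterministic sum: {deterministic:.3f}")
print(f"stochastic average over {trials} trials: {total / trials:.3f}")
```

On each trial only about 15 percent of the connections actually transmit, yet the averaged output converges on the deterministic result.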

Other authors on the paper were Cailey Bromer of the Salk Institute; Justin Kinney of the McGovern Institute for Brain Research; and Michael A. Chirillo and Jennifer N. Bourne of The University of Texas at Austin.

The work was supported by the NIH and the Howard Hughes Medical Institute.

Note: Materials provided above by The Brighter Side of News. Content may be edited for style and length.


Joseph Shavit, Space, Technology and Medical News Writer

Joseph Shavit is the head science news writer with a passion for communicating complex scientific discoveries to a broad audience. With a strong background in science, business, product management, media leadership, and entrepreneurship, Joseph has the unique ability to bridge the gap between business and technology, making intricate scientific concepts accessible and engaging to readers of all backgrounds.