Artificial intelligence memory: absolute memory?
Although our memory struggles to recall simple details, such as what we ate a month ago, a computer can record virtually everything without limit. A solid-state drive, or SSD (a very fast hard disk), produced by Seagate even reached 60 terabytes of storage capacity in 2016, enough to archive several thousand high-definition videos. And the new AIs are gathering data by the millions! Trained on these huge databases, they are constantly improving. It is hard not to feel inferior in the face of this seemingly endless digital memory… Yet in the digital sector, the opposite is true: far from being outclassed, human memory actually inspires the engineers who struggle to replicate its complexity.
Artificial intelligence memory: Mimicking the brain
“The point at which AI gets stuck coincides with what we don’t know about our own memory,” analyzes Frédéric Alexandre, director of the Mnemosyne project team at the National Research Institute for Digital Science and Technology (Inria). This former computer engineer is well placed to understand the importance of the human brain: while specializing in artificial intelligence, he found himself “stuck” in the early 2000s.
He then turned to a neurology laboratory in Bordeaux to look for answers to his questions. Thus was born Mnemosyne, an alliance between computer engineers and hospital neurologists. Together, they try to understand our memory using algorithms… and to improve those algorithms with what they decipher from our brains. Enough to drive medical progress too: “We are improving our knowledge of neurodegenerative diseases like Parkinson’s or Alzheimer’s,” Frédéric Alexandre rejoices.
But it’s not just Mnemosyne: Google, Apple, Meta… all the major AI makers are now trying to mimic our brains. Sometimes with spectacular breakthroughs, such as autonomous cars that can spot danger faster than a human, or the Midjourney AI, which can create a work of art from a simple sentence. One of its creations even won a digital art competition in Colorado with the work Space Opera Theater… and sparked heated debate among graphic designers. But this progress, which seems great to us, is actually very limited. “Nowadays the question is not about improving performance, but about designing truly intelligent systems,” Frédéric Alexandre analyzes.
HEBB’S RULE
The secret of intelligence lies in the neural network of our brain. With each sensory stimulation, neurons fire electrical impulses (controlled by a subtle ionic mechanism of potential differences) and release neurotransmitters, the molecules that pass information from one neuron to the next and form our memory. How can such a biological mechanism be replicated in artificial intelligence?
“We use models that, although incomplete, allow us to approximate the activity of neurons,” the director of Mnemosyne assures. Each neuron in our brain connects to an average of ten thousand others. According to the Inria research director, that figure looks impressive but hides a huge gap: “We have hundreds of billions of neurons, so very few of them are actually connected to one another!”
Yet here lies the key to our memory: although each neuron is connected to the others, they do not all fire at the same time. Those associated with visual activity, for example, will be activated at the same moment as auditory neurons when we watch fireworks, and a memory is formed by the simultaneous activation of these areas. Equations make it possible to associate each neuron with the sum of its connections and to adjust the weight of each one. Known as Hebb’s rule, this principle was proposed by Donald Hebb in 1949, long before the advent of artificial intelligence!
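To make this concrete, here is a minimal sketch of Hebb’s rule in Python; the scenario (a “visual” and an “auditory” neuron firing during fireworks) and every name in it are our own illustration, not the Mnemosyne team’s code.

```python
# Minimal illustration of Hebb's rule (1949): the connection between two
# neurons is strengthened every time they fire at the same moment.
learning_rate = 0.1   # how quickly the connection adapts
weight = 0.0          # strength of the visual-auditory connection

# 1 = the neuron fires, 0 = it stays silent. During fireworks, the
# "visual" and the "auditory" neuron tend to fire together.
events = [(1, 1), (1, 1), (0, 1), (1, 0), (1, 1)]

for visual, auditory in events:
    # Hebb's rule: delta_weight = learning_rate * activity_i * activity_j
    weight += learning_rate * visual * auditory

print(f"connection strength after learning: {weight:.1f}")  # 0.3
```

Only the co-activations count: the three moments when both neurons fired together are what strengthen the link, the “fire together, wire together” idea at the heart of Hebb’s rule.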
These kinds of neural-model algorithms are at the heart of the new AIs, which combine different experiences gathered through different forms of learning. There are unsupervised versions, in which the algorithm tries to detect similarities until it gets the right result, like picking out a family of birds in full flight. Others are more complex, such as “supervised layer networks”, “where medical information can be linked to pathologies”. These so-called deep networks, used in deep learning, are already showing great results.
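As an illustration of what such a “supervised layer network” looks like in code, here is a minimal sketch in Python using NumPy on a toy problem; the dataset, layer sizes and names are assumptions of ours, far removed from real medical models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy supervised dataset: four inputs and the labels the network must learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of connection weights: input -> hidden, hidden -> output.
W1 = rng.normal(size=(2, 8))
W2 = rng.normal(size=(8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: each artificial neuron sums its weighted inputs.
    hidden = sigmoid(X @ W1)
    output = sigmoid(hidden @ W2)

    # Supervised learning: nudge every weight so the output moves
    # toward the known labels.
    err_out = (output - y) * output * (1 - output)
    err_hid = (err_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ err_out
    W1 -= 0.5 * X.T @ err_hid

print(np.round(output, 2))  # close to [[0], [1], [1], [0]]
```

Real deep networks stack many more layers and neurons, but the principle is the same: the known labels supervise the adjustment of every connection weight.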
In oncology, such networks can help a doctor identify tumors or moles that need watching. Yet these algorithms only simulate “simple cognitive functions”, Frédéric Alexandre acknowledges. “Thinking, creativity, problem solving or planning, these higher functions remain inaccessible to AI.”
Know-how, that is: the difference between learning basic tasks and truly “knowing” them, which remains reserved for our species. According to Inria’s research director, however, this obstacle will soon be overcome: “All the big AI players are working on it.”
Compared with our biological complexity, computer memory is rudimentary. A hard disk is a physical surface, in this case a flat platter on which data is engraved as code translated into a sequence of 1s and 0s. This binary code can easily be read with a laser, and these combinations can be erased and replaced with others.
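As a tiny illustration of that translation into 1s and 0s (a sketch of ours, not tied to any particular disk format), here is how a word becomes a bit sequence and back in Python:

```python
text = "memory"

# Encode each character of the word as 8 bits (one byte)...
bits = "".join(format(byte, "08b") for byte in text.encode("utf-8"))
print(bits)  # 011011010110010101101101011011110111001001111001

# ...and read the same sequence of 1s and 0s back into the original data.
restored = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
print(restored.decode("utf-8"))  # memory
```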
Unless the medium is destroyed, this data is preserved even without a power supply, which is not the case for random access memory, or RAM, which gives the computer quick access to all the files it needs to run a program. It is held on a small rectangular stick and lasts only as long as the power is on. If we dared a comparison, the hard drive would be our deep memory store and the RAM our short-term memory.