AI Chatbots Undermine Memory Retention: MIT Study

Haider Ali


People who rely on ChatGPT show less brain activity than those who work without AI help, according to a new study from MIT’s Media Lab that has researchers worried about the long-term effects on our brains.

The research, led by scientist Nataliya Kosmyna, tracked 54 young adults from Boston as they wrote SAT essays under different conditions.

One group used ChatGPT, while another group used Google Search, and a third worked with no digital assistance at all. What the researchers found using brain-monitoring technology was troubling.


ChatGPT users show reduced brain activity when performing cognitive tasks

The study's findings were striking: ChatGPT users consistently produced weaker results, and their performance declined over time. As the study progressed over several months, the group with AI help also grew increasingly reliant on it.

How do AI helpers affect creative work? The students who had access to ChatGPT produced highly formulaic essays: they lacked original thinking, leaned on the same turns of phrase, and made nearly identical arguments.

Researchers brought in two English teachers to review the AI-assisted work, and both experts agreed that “it was largely soulless.”

But that’s not the end of it — brain scans told an even more concerning story.

The ChatGPT users showed weak executive control and poor attention levels compared to the other groups. By their third essay, many had essentially given up on writing altogether, simply feeding prompts to ChatGPT and copying the results with minor edits.

“It was more like, ‘just give me the essay, refine this sentence, edit it, and I’m done,'” Kosmyna said.

Meanwhile, students who wrote without any digital help showed the most brain connectivity, especially in areas linked to creativity, memory, and language processing. They were more engaged, more curious, and felt greater ownership of their work. Even the Google Search group performed well, staying mentally active throughout the writing process.

The memory problem

The real test came when researchers asked students to rewrite one of their previous essays — but this time, the ChatGPT group had to work without AI while the no-tech group could use ChatGPT for the first time.

Students who had relied on ChatGPT couldn’t remember much about their own essays and showed weaker brain wave patterns linked to deep memory formation. They had completed the task efficiently, but their brains hadn’t actually processed or stored the information.

The group that started without AI, however, performed excellently when given access to ChatGPT, suggesting that building thinking skills first made them better at using AI as a tool rather than a crutch.

A better way to use AI in education

While this study raises red flags about AI dependence, some educational platforms are trying to use artificial intelligence more thoughtfully. For example, Overchat AI is a platform where students can ask AI tutors about subjects ranging from language learning to research planning; it is designed to support rather than replace student thinking.

The key difference appears to be in the approach — using AI as a thinking partner that guides students through problems rather than simply providing answers. This suggests that the technology itself isn’t necessarily harmful, but how students use it makes all the difference.

The MIT study included an interesting twist regarding AI’s limitations. When the research was published, many social media users fed the paper through AI tools to create summaries. Anticipating this, Kosmyna had embedded “AI traps” in her work, including instructions that would limit AI comprehension. She also noted that AI summaries incorrectly claimed her study used GPT-4o, a detail never specified in the original research.

Racing against policy decisions

Kosmyna took the unusual step of releasing her findings before peer review because she’s concerned about education policy moving too fast. She worries that policymakers might rush to implement AI tools in schools without understanding the potential consequences for developing minds.

“What really motivated me to put it out now before waiting for a full peer review is that I am afraid in 6-8 months, there will be some policymaker who decides, ‘let’s do GPT kindergarten,'” she said. “I think that would be absolutely bad and detrimental.”

Dr. Zishan Khan, a psychiatrist who treats children and adolescents, sees the real-world effects in his practice. Many of his young patients rely heavily on AI for schoolwork, and he’s noticed that “these neural connections that help you in accessing information, the memory of facts, and the ability to be resilient: all that is going to weaken.”

The MIT team is already working on similar research focused on programming and software engineering, with early results that Kosmyna says are “even worse.” As more companies consider replacing entry-level coders with AI, these findings could have major implications for how we think about efficiency versus human cognitive development.

The study adds to growing evidence that while AI can boost productivity, it may come at a cost to motivation and critical thinking—a trade-off that deserves serious consideration as these tools become more common in classrooms and workplaces.
