LLMs are making us dumber, another major internet outage, crypto is still struggling
Plus a variety of new MCP tools recently released
👋 Introduction
Welcome to the second issue of Decentralized Technologies. This one's a bit late; turns out writing isn't so easy after all 😃. I'll aim for a more consistent schedule next time (hopefully).
Today, we take a look at the latest research on how LLMs affect us, the recent internet outage that started at Google Cloud Platform (GCP) and rippled into Cloudflare, and a batch of newly released AI MCP tools, while briefly touching on the current blockchain landscape.
Let’s take a look 👇️
🔍 Deep Dive: Your Brain on ChatGPT research
Too much exposure to LLMs is making us ‘Dumb’
In the age of ubiquitous AI, concerns about how large language models (LLMs) like ChatGPT are altering not just what we do, but how we think, have moved from armchair speculation to the laboratory. A recent study from MIT Media Lab, "Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task" by Kosmyna et al., provides the most rigorous evidence to date that overreliance on LLMs may be quietly eroding our cognitive engagement, memory, and sense of ownership over our work.
The Study: How LLMs Change Our Brains and Behavior
The researchers recruited 54 university students and young professionals, dividing them into three groups for a series of essay-writing tasks:
one group used only ChatGPT (LLM group)
another used only traditional web search (Search group)
the third group had to rely solely on their own brains (Brain-only group)
Over three sessions, each group wrote essays under these conditions, while their brain activity was monitored via EEG. In a fourth session, some participants switched groups: those who had used LLMs wrote without them, and vice versa.
What makes this study stand out is its combination of neural data (EEG), linguistic analysis, and behavioral interviews. The researchers weren't just interested in whether the essays were better or worse, but in how the process of writing—and the mental effort behind it—changed with each tool.
Key Findings: Cognitive Offloading and Neural “Downshifting”
The most striking result is that the more external support participants had, the less their brains worked. The Brain-only group showed the strongest, most widespread neural connectivity, indicating high cognitive engagement. The Search group was intermediate, while the LLM group had the weakest overall coupling, especially in the alpha and beta frequency bands associated with attention and memory. Their essays also showed less diversity in vocabulary and topic structure, and they struggled to recall or quote from their own writing—even immediately after finishing. In interviews, LLM users reported a lower sense of ownership over their essays, often describing them as partly or mostly the AI's work.
Ownership, Satisfaction and Neural Adaptation
Ownership of the written work followed a clear gradient: Brain-only participants felt the strongest sense of authorship and satisfaction, Search group members somewhat less so, and LLM users the least. Many LLM users described their essays as ‘robotic’ or lacking a personal voice, and some felt guilty or uneasy about passing off AI-generated text as their own. Satisfaction with the final product was highest in the Search group, perhaps reflecting a balance between efficiency and engagement.
The EEG data suggest that repeated reliance on LLMs leads to a kind of neural adaptation, where the brain ‘downshifts’ its engagement during complex tasks. This is consistent with the concept of cognitive offloading: as external tools take over more of the cognitive load, our brains become less practiced at the underlying skills, leading to what the authors call ‘cognitive debt.’ Over time, this debt may manifest as reduced memory, creativity, and critical thinking.
In the fourth session, participants who switched from LLMs to Brain-only did not fully recover their previous levels of neural engagement or performance, indicating that the effects of cognitive offloading are not quickly reversible. Conversely, those who switched from Brain-only to LLMs showed a spike in neural activity, likely due to the novelty and increased cognitive demands of integrating a new tool—but this did not reach the levels seen in unaided writing.
Implications for Learning and Work
The implications are profound, especially as LLMs become standard tools in education and knowledge work. While AI can boost productivity and lower the barrier to producing polished text, it may also be undermining the very skills we value: deep understanding, critical analysis, and the ability to generate and remember original ideas. The authors warn that as LLMs become more integrated into learning environments, we risk a ‘likely decrease in learning skills,’ with students and professionals alike becoming passive consumers of AI-generated content rather than active creators.
Limitations and Future Directions
The authors acknowledge several limitations: the sample size was modest and drawn from elite universities, the tasks were limited to essay writing, and the timeframe was relatively short. However, the convergence of neural, behavioral, and linguistic data makes the findings hard to dismiss. Future research could explore long-term effects, different types of tasks, and interventions to mitigate cognitive offloading—such as requiring active editing or reflection on AI-generated drafts.
This is nothing new
Turns out a similar effect was documented for Google search more than a decade ago:
new decade, same verse
— 👩‍💻 Paige Bailey (@DynamicWebPaige), Jun 19, 2025
Dubbed the ‘Google Effect’, the finding was that easy access to information online leads people to remember less and rely more on external sources. The new MIT study suggests that LLMs are accelerating and deepening this trend: not only do we remember less, but we may also be losing the ability to generate, own, and recall our own ideas.
As we rush to embrace AI assistants in every corner of our lives, it’s worth remembering that the tools we use shape not just our output, but our minds. The challenge ahead is to find ways to harness the power of LLMs without letting our cognitive muscles atrophy—a task that will require as much discipline and creativity as any essay prompt.
AI affecting content creators
Even if we ignore the research above, it is clear that AI is disrupting the way we create content. According to Cloudflare’s CEO, Google originally scraped about 2 pages of your website for every visitor it sent you. Six months ago that had deteriorated to 6 pages scraped per visitor, and today, with the introduction of AI Overviews, the ratio seems to be 18 pages scraped per visitor. That almost counts as good news, since for ChatGPT the ratio might be as high as 1,500 pages scraped per visitor 😦
The industry currently rewards quantity over quality, so AI-written text is not going to slow down…
Personal Note: Ironically, I used AI tools to help me research and write the piece above, and suddenly I’m acutely aware of the effects AI has on me.
🚀 The “great” internet outage
Who knew cloud providers depend on each other?
Last week we talked about the MCP (Model Context Protocol) standard, which you can now use entirely locally. After what happened on 12th June, you might consider self-hosting the services you most depend on.
What happened?
A bunch of Google Cloud services had an outage on 12th June, which caused a cascading effect. Turns out Cloudflare uses GCP to host storage infrastructure for their Workers KV product, which is internally used by a series of other Cloudflare products, like WARP, Access, Gateway, Images, Stream, Workers AI and more.
Anatomy of the Outage
The incident began in the early hours of June 12, when Google Cloud’s status page reported widespread network connectivity issues impacting multiple regions, including the United States, Europe, and Asia. According to Google’s post-incident analysis, a new Service Control feature for quota policy checks had been added on 29th May; it lacked appropriate error handling and was not protected behind a feature flag.
On 12th June, a policy with unintended blank fields was created, which replicated globally in seconds. The blank fields triggered a null pointer exception (which seems to be the culprit in 99% of outages nowadays), which caused processes to go into a crash loop. As a result, Google Cloud customers experienced packet loss, timeouts, and the inability to reach cloud-hosted applications and APIs. The disruption rippled outward as services reliant on Google’s infrastructure—ranging from enterprise SaaS providers to consumer-facing websites—became unreachable.
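To make the failure mode concrete, here is a minimal sketch in Go. It is purely illustrative: the type and function names are invented, and Service Control’s real code is not public. It only shows how an unguarded field on a replicated policy object turns a data problem into a crash loop, and how a simple guard would have contained it:

```go
package main

import (
	"errors"
	"fmt"
)

// QuotaLimits and QuotaPolicy are hypothetical stand-ins for the replicated
// policy data; the actual Service Control structures are not public.
type QuotaLimits struct {
	RequestsPerMinute int
}

type QuotaPolicy struct {
	Limits *QuotaLimits // ends up nil when the policy is written with blank fields
}

// checkQuota sketches the failure pattern: code that assumes Limits is always
// populated dereferences nil when it reads a blank policy. Because the bad
// policy had already replicated globally, every restarted process re-read it
// and crashed again, producing the crash loop described in the post-mortem.
func checkQuota(p *QuotaPolicy) (int, error) {
	if p == nil || p.Limits == nil {
		// The missing guard: handling the blank policy here (or hiding the new
		// check behind a feature flag) keeps the blast radius small.
		return 0, errors.New("quota policy has blank fields, skipping enforcement")
	}
	return p.Limits.RequestsPerMinute, nil
}

func main() {
	bad := &QuotaPolicy{} // simulates the policy created with unintended blank fields
	if _, err := checkQuota(bad); err != nil {
		fmt.Println("handled gracefully:", err)
	}
}
```

Without the nil check, the same call panics, the process restarts, re-reads the same globally replicated policy, and panics again, which is exactly the loop that took the services down.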
Almost simultaneously, Cloudflare, one of the world’s largest internet security and performance providers, began experiencing its own network instability. Cloudflare says it is ‘deeply sorry for this outage’, but the damage was done and its reputation took a hit.
The incident highlighted the ‘hidden centralization’ of the internet. While cloud and edge computing are often marketed as decentralized and robust, the reality is that a handful of providers—Google, Cloudflare, AWS, Microsoft—are responsible for the vast majority of global internet traffic routing, DNS resolution, and security filtering. When something goes wrong at this level, the cascading effects are swift and severe.
Other Cloudflare news
Even with the outage, Cloudflare are still releasing interesting new features, like the ability to run Docker images in Containers on their distributed global network.
They also introduced Code Sandboxes, which let developers run AI-generated code on demand inside an isolated environment.
🔥 Hot Topics: Gemini CLI launched
Google launched their Gemini CLI, which is an open-source AI agent that brings the power of Gemini directly into your terminal. The best part? Google Gemini API offers a FREE tier, so you can use this tool for free with reasonable limits.
What can you do with it?
understand and edit large codebases, even beyond Gemini’s already high 1M token context window
generate apps from sketches using Gemini’s multimodal capabilities
automate tasks, like opening pull requests or handling complex rebases
use MCP servers to connect to new capabilities
If you want to try it out, the Quickstart guide on GitHub will help you.
Now, if you want to go crazy, you can integrate this with Claude Code.
What?
Someone made Claude Code use Gemini CLI. They added instructions to the CLAUDE.md file telling Claude to use the CLI in non-interactive mode to gather information about large codebases. You gotta love hackers; they will always find interesting solutions to problems almost nobody has, and they don’t mind pairing tools from two competing companies to get there 😃
Claude Code pro tip:
Ask it to use Gemini CLI with its 1M context window and free plan in non-interactive mode to research your codebase, find bugs, and build a plan for Claude to action
Prompt and link below:
— Ian Nuttall (@iannuttall), Jun 27, 2025
📈 Recent Trend: Some consider blockchains “dead”
Last time we covered Hyperliquid and said that some blockchains are still thriving.
Well, this week some say blockchains are dead 😂
The truth as always is somewhere in the middle.
Unpopular opinion:
Crypto died in 2021 and we have been trading the corpse ever since.
There are no more pigs left to slaughter since everyone became aware that crypto as a whole is a scam.
The belief system that made this industry work has completely collapsed.
At least in
— Chill House (@ChillHouseSOL), Jun 22, 2025
Dead?
I wouldn’t fault anyone for thinking blockchains are dead, since some of them (Polkadot) raised over $500M and promised a revolution. Unfortunately, it turns out money is not the end-all be-all, and Polkadot has turned into something of a ‘ghost chain’: users have no apps to use, and developers don’t want to build on it because there are no users.
Elsewhere, MEMEcoin mania seems to be calming down. People are still launching coins, but they’re the only ones who buy them, and end up rugging themselves 😁
Alive?
Still, other projects are doing quite well. PancakeSwap, the leading DEX (Decentralized Exchange) on BNB Chain (a blockchain created by Binance, the leading crypto exchange), recently launched their Crosschain Swaps feature.
Coinbase CEO also said “the world needs crypto, now more than ever”, so there’s that too.
🏆️ Top GitHub Repo: Zen MCP
🌟 3.4k+ stars | Your ultimate AI development team
Using Claude or Gemini CLI + other models, this Model Context Protocol server features true orchestration between AIs, with conversations that continue across workflows.
Built by BeehiveInnovations
What can you do with it?
You can give Claude prompts in natural language, ranging from “analyze this architecture design with zen” to “use local-llama to add missing translations to this project”, and it will use the appropriate AI model under the hood.
Bonus: OpenMemory
Mem0AI launched OpenMemory MCP, a local memory server that syncs memory across your AI assistants. Worth giving it a try if you use multiple assistants and want shared context across each one.
Bonus 2: Checkpoints and reverts in Claude Code
Claudia by getAsterisk is a GUI app and toolkit for Claude Code that lets you create custom agents, track usage, manage interactive sessions, and more.
🔄 Tech Updates
Midjourney has finally entered the AI video race
Tencent (the Chinese company behind WeChat) released an open-source Image to 3D Model generator
Elon Musk gave a speech at Ycombinator’s AI startup school
v0 introduced Design Mode
The CEO of Tether (the company behind the biggest stablecoin, USDT) believes the business is worth $2 trillion. Tether is already the most profitable company per employee in the world
🗝️ Legacy Revival
Do people really use Swift?
This guy made a game that runs a million chess boards in a single Go process
GitHub’s tech stack includes Ruby on Rails (one of the largest codebases in the world), React, Go, Swift, Kotlin, and .NET
Do .NET developers like Microsoft’s push of Copilot?
With .NET 10 you can just create an app.cs file and run it without any setup
What is a Data Analyst, Data Engineer and Data Scientist in the Python world?
🐦⬛ X Hits
Moving from Next to Vite & adding a proper API has advantages
Receiving a cease and desist for implementing a free SaaS alternative to DocuSign
The insecure implementation of Cluely (the undetectable AI that lets you cheat)
Generate AI actors from a single prompt
Hetzner servers are really cheap
💡 What’s next?
Not sure what the next issue holds for us, TBH, given the current landscape, but I can promise you that I won’t try to automate this newsletter using AI like this guy did with an Instagram account 😄
Till next time,
Rares.