Computer bytes and human bits

Long Live the Internet

The internet, as we know it, is dying.

Soon, the internet we all hold dear will be no more. And the only obituary you'll be able to read online for free will be AI-generated.

I think that the internet of the future won't be one unified entity; it will consist of two parts. The first part will be your inbox and the apps — the places where you're able to talk to real humans.

The second part will be AI-generated content. This is the part the dead internet theory tells you about. The primordial soup of AI crap and bot activity. Maybe AGI will appear there? Who knows.

This great divide started a while ago with the fall of RSS and the rise of big hubs. Big hubs like Facebook, Google, Amazon, and Instagram captured people's attention and put content behind their walls. Instead of an interconnected network of knowledge, the internet started turning into old broadcast TV, where you have three or four main channels to tune into, and that's it. After that, people moved on to walled messenger apps, for one reason or another. More and more important conversations and content started happening there instead of on old-school forums. Yes, the content on Facebook, Instagram, Telegram, WhatsApp, and the rest is online. But can you still say that it's on the internet? I'm not so sure.

With the arrival of generative AI, it's only gotten worse. Generative AI is the King Midas of our days. Everything it touches – text, voice, images – turns into gold for shareholders, but in the grand scheme of things, it's a curse. Take this story from 404 Media, for example:

In December, we noticed that articles we spent significant amounts of time on—reporting that involved weeks or months of research, talking to and protecting sources, filing public records requests, paying for and parsing those records, hours or days of writing, editing, and packaging—were being scraped by bots, run through an AI article “spinner” or paraphraser, and republished on random websites. — Source

I'll be following the copyright cases against OpenAI and Google with great interest, but let's not pretend that big corporations are our friends. What I don't understand are the individual entrepreneurs trying to make a quick buck off it. They're like cheaters in an online game — grabbing a quick dose of dopamine while ruining a great thing and its community.

Your voice is not safe in the AI world either. Here are the latest stories from the US, but I'm pretty sure there are many more around the world:

Two, generative AI tools are responsible for a new category of electioneering and fraud. This month synthetic voices were used to deceive in the New Hampshire primary and Harlem politics. And the Financial Times reported that the technology is increasingly used in scams and bank fraud. — Source

When it comes to images, there have been a ton of war fakes on Twitter and Telegram. Since more and more people don't bother checking the authenticity of images or videos, the problem is likely to escalate. Then, there's porn:

Sexually explicit AI-generated images of Taylor Swift have been circulating on X (formerly Twitter) over the last day in the latest example of the proliferation of AI-generated fake pornography and the challenge of stopping it from spreading. One of the most prominent examples on X attracted more than 45 million views, 24,000 reposts, and hundreds of thousands of likes and bookmarks before the verified user who shared the images had their account suspended for violating platform policy. The post was live on the platform for around 17 hours prior to its removal. — Source

I don't know whether regulation or collective action is the answer to all these human-made problems, but I was intrigued to learn about Nightshade. Nightshade is a tool that lets artists subtly alter the images they upload online. If those images are then used to train an image-generating AI, they can seriously damage the model:

The tool, called Nightshade, is intended as a way to fight back against AI companies that use artists’ work to train their models without the creator’s permission. Using it to “poison” this training data could damage future iterations of image-generating AI models, such as DALL-E, Midjourney, and Stable Diffusion, by rendering some of their outputs useless—dogs become cats, cars become cows, and so forth. — Source

I think this is incredibly cool and creative; it makes you feel like you're living in the cyberpunk future you've always read about. The idea that you can “poison” data is fascinating.

Yes, the internet as we know it is fading away. Valuable content is increasingly hidden behind closed doors, whether it's a paid newsletter, a Discord or Slack community, or a simple paywall. Navigating the maze of content farms and marketing websites is becoming ever more challenging, and research suggests that the quality of Google search results is declining. This is why more people are turning to Perplexity, which leverages those very same AI models to run searches and provide summaries with references.

And that's not even mentioning the efforts by several governments around the world to wall off their national internet segments from the global one.

I don't know if the new internet is a good or a bad thing, but I can definitely say that I miss the old one. You can say it's nostalgia speaking, but I'm sure the rise of the machines won't look like it did in Terminator. There won't be a menacing Skynet or some external intelligence orchestrating it all — instead, we'll willingly give away our control.

So, be part of the resistance. Support the human internet.