The world’s on fire

Part 1: So let’s just make AI porn.

I’ve been working on a project to simplify the whole AI/LLM mess for a while now. It started as a primer for the people I consult for: a simple guide to the available options they could consider, so they could plan accordingly.

The scope snowballed into a wiki-style launchpad that lays out all possible options, features, benchmarks, tools and strategies anyone could use to integrate AI technology into a small business.

There still doesn't seem to be any comprehensive service that does this. Currently, we only have vague leaderboards and the media screaming “AI! AI! AI! AI! AI! AI! AI!” as if they have a quota to meet.

I decided to pursue this project because I believe I have a unique perspective to offer and saw a chance to leverage it into future opportunities. In my previous work, I encountered attention networks while trying to make sense of the mountains of junk data generated during the BIG DATA and Internet of Things craze.

Every consultant on earth would tell executives that BIG DATA was going to be the future currency and that they should not delete a byte of it.
But those consultants were silent when operations needed answers on exactly how and why.
“Just trust the unproven process.”

Sound familiar?

Long story short, the initial results from our work with TensorFlow were promising, but they amounted to the kind of obvious links between related data that you could find with any existing tools and rules. It all quickly fell apart when we attempted to uncover “deep insights” – links that are not apparent to people or conventional tools.

This technology, on its own, was not mature enough at the time to consistently provide value.
I believe the problem with data models has nothing to do with compute power or data quantity. In fact, it's the type of problem that would likely be made worse by brute-forcing excess data and compute into the process.

It's the eternal problem of defining “good” data.

Complete data is not necessarily perfect data. Having all the data in the world does not make it perfect. There is no such thing as perfect data or a perfect model.

In order to have perfect data, you need perfect processes in a perfect operating environment. Anyone who has been in an operations role – whether in the military, the medical field, or commerce – will agree on one thing: the real world doesn't care about rules and expectations. All the real world does is change and ruin your rules and expectations.

Unlike the unpredictable nature of the real world, transformer models can’t change.

They are built on static foundations and rely on three risky assumptions:

The assumption that useful information in the system significantly outweighs misinformation. That is very difficult to judge as an uninformed user.

The assumption that you are asking the correct questions, bearing in mind that bad information leads to bad questions, which produce bad answers, which ultimately lead to poor decisions.

The assumption that changing metrics and decision-making does not inspire a change in behavior. This is especially pertinent given the fact that these systems allow decision-making without accountability.

What changed?

I've been out of the tech industry for a while, and I was excited to see what major breakthrough had happened to make language-based transformer models the foundation of AI.

The whole idea of AI looking like this seemed... counter-intuitive. I had always considered “neural networks” and “machine learning” to be more illustrative terms than literal definitions, yet ChatGPT seemed to pop out of nowhere.

Boasting actual INTELLIGENCE.

As skeptical as I was, these were the people with near-infinite resources and geniuses to throw at the problem. Judging by the names and dollar amounts associated with the project, the teams’ prior work, and the rave reviews from educated professionals, it did feel as if there was something real going on.

You can imagine my surprise when I saw that the solution was a combination of everything that got me out of the tech industry.
Startup bullshit.
“Fake it till you IPO.”
It didn't help that one of the places where I had attempted this kind of data modeling was a startup more interested in the aesthetics of the prototype dashboard than in the accuracy of the information it presented.

The massive breakthrough solution to overcome the AI barrier didn't appear to be any more advanced than: Ctrl A on the entire internet, copy, paste and make a giant model. (This is obviously a facetious statement. The rational and overly technical thoughts can be found here.)

It didn’t take much capability testing of LLMs for me to spiral into the state of a manic conspiracy nut, questioning everything in this industry.

Nothing about it felt right. The grand promises, the irrational ideas of intelligence, the claims of IP theft, and the fact that they were charging money for this – each is distasteful on its own and worthy of scrutiny. However, one of the sticking points for me was the fact that it didn't really work.

It “could” work... occasionally. Sometimes even impressively. But it failed catastrophically far too often. And the failures were often sneaky too – able to pass muster and frequently requiring a forensic breakdown to identify.

Somehow, these errors were also the user’s fault. For not “prompting” correctly. For not burning enough tokens. For not paying for the better model or hardware. For not blindly defending the company and shilling for its software to be adopted and improved. For not understanding the inner workings of a machine that ships without proper enablement, documentation or honest communication.

This is the future big tech seems to want. It made sense after the last twelve years of runaway enshittification: maximum profit, minimum effort, zero accountability. Seeing it so plainly paraded as an aspirational future made me want to dig into the specifics and try to communicate them. My scope for this project shifted once again to demystifying what is happening – chasing that neo-liberal fantasy of “just giving people the right information” and trusting that society will magically make the right choices. Trusting that only the most rational ideas will emerge in the free market of ideas.

Ideas like the Metaverse. The Blockchain. Windows Vista. Google Barge.

My plan was to avoid overly technical details, because regular people simply don’t care. At no point did I want to mention some abstract future outcome of Artificial Super-Intelligence or doomsday events or a utopia where everything is rosy.
That could happen next month or in 50 years – but people have bills to pay today. I just wanted to present the simplified facts as they have been observed:
people, tools, quotes, promises, wins, and failures.

There’s a massive issue that comes from dealing in facts.
Facts require effort. A lot of effort. Effort to investigate claims and map events and follow an insane quote to a single line in a four-hour stream-of-bullshit podcast and track down sources and survey real user experiences and read papers and just test everything.

The best part is that the crap keeps flowing and it's impossible to keep up. In the time it took me to define the AI leaderboards and benchmarks and explain why they hardly matter, a dozen new major claims and four new services emerged. There were also a few thousand layoffs, and one company changed its internal LLM use policy no fewer than 20 times.

In the time it took to document a single model failure (such as a random-word test or counting to a million), a new model was introduced, followed by a roll-back, multiple updates and apologies.

To top it off, the deeper you looked, the worse it got. It fueled a growing dread that there's no point to any of this. We all know that logic and reason never win out when people are paid to be unreasonable and peddle nonsense. Or when people are just too desperate and exhausted to put any more effort into rationalizing a bad situation.

You can see it in the way most skeptics of this industry are treated, and you could argue that this is just the way things are – that my critiques may be deemed invalid because this field is simply moving too fast and is destined for success.

But what if it isn't? What if the cost of failure is far worse than the cost of not participating?

These are the types of questions people didn’t seem willing to entertain. You can dish out every possible fact, informed opinion or financial projection, only to have it fall on deaf ears.
You can spell out the illegality and the hypocrisy, but it won't matter to the people who need to hear it. You're just bringing down the “good vibes”.

The line is going up. Money becomes more money. Everything's good. We’ve just created 20 billion dollars in the last two sentences.

Ignore the uncontrolled truck of TNT flying towards the stack of red barrels with fire symbols on them.

I don't understand vibes.

After months of spinning my wheels, I had to admit that it isn’t possible to complete this side project with the few scattered hours I can spare each month.
That left me with a choice: I could throw myself into it completely, let it consume all my free time and become even more of a delight to be around; or I could set it aside and unplug from the whole thing – let the bubble fizzle out, explode, or inflate to infinity, and be happy watching other people get paid to make it happen.

Suffice it to say, I got to have a few very peaceful weeks. It was better for me, my mental health and my family.

Then I checked again and discovered that OpenAI had decided to do porn.

Not mental health guardrails. Not reliable methods for detecting and preventing hallucinations. Not an SMME (small, medium, micro enterprises) toolkit with industry-specific workflows and official plugins for common software. Not a clear service-level agreement for smaller users, setting quality assurance standards to work against, or long-term price assurances around which one can make strategic business decisions. Not any real value generator for their business or customers.

Porn.

I don't really care anymore.

Not in the “I don’t care anymore and will expend thousands of words telling you exactly why and how much I don’t care” way. This is so far beyond reason now that I don't care about being reasonable or coherent. I'm going to embrace my ultimate fate and become that old man yelling at clouds to get off his lawn – because I do care.
What I do care about is the slow death of good technology and tech literacy, the fact that people are hurting because of this mess, and that it’s going to be regular people like you and me and our children who are almost certainly going to have to deal with the consequences.

I need to make it clear that I don't really care about the adult content industry. I don’t judge most of the people who work in that field. My distaste here is specific to LLM companies making these types of business decisions.

At the time, I was sure the porn thing should be bigger news. You can imagine my frustration when most people didn't even have the energy to give it a second thought, and those who could spare a single thought only did so as a halfhearted obligation.
Admittedly, it was small news compared to the inbred circle-jerk mess of leveraged financing that OpenAI seems to be cooking up with everyone. It was also announced alongside the release of yet another unwieldy LLM-based browser and a week before earnings calls.

Look at it this way: the company that commands a significant share of global wealth allocation (i.e., a considerable percentage of most people’s retirement funds) and has access to what they say is the most advanced technology known to humanity is now turning to the world’s oldest profession. With so much money and so many people’s lives at stake, it blows my mind that there isn't a peep about it in any tech or finance media.

Now, I’m hardly the most qualified person to speak on the world’s oldest profession. After all, I am a gamer. I just have a funny feeling…

A feeling that OpenAI is not doing this in the name of promoting a healthy, sex-positive culture that aims to produce ethical adult content – where creators are fairly compensated, consumers are encouraged to develop realistic expectations and appropriate sexual habits, and the art could play a part in couples fostering more fulfilling relationships.

It’s just a hunch.

But what if they are in financial trouble and are just doing porn for the money?

Do people in financially stable positions resort to this sort of business?

People drop out of med school, PhD programs, and even lucrative corporate jobs to pursue a career like this, either because they need the money or because they believe it is the only available path to independence.
It's likely that a lot more people are making this decision nowadays, given the current state of everything. Very few people are jumping into socially distasteful work if they happen to be rolling in cash. I am willing to bet on it.

And AI companies are rolling in money, right?
It inspires so much confidence when the CEO is asked about revenue and their answer is all about what the company “will do” rather than what “is happening”. It's perfectly normal for healthy companies to talk about how they definitely don't want a bailout.

It can't be overstated, so I will say it again: the leading companies of this new industry cannot sustain themselves by selling their products and services, in spite of all the money, tech, access, connections, hype and investor patience at their disposal.

Even NVIDIA won’t be able to sustain chip demand. The useful life of these things has been creeping up due to underutilization, and I can't imagine that a thriving second-hand market exists.

The main focus at the moment is fluffing up the stock market by blowing life into stagnant old companies with insane deals that seem to be valued through a complex forecasting equation: roll a handful of dice and multiply by a billion.

They also seem intent on retail advertising, because nothing says innovation like peddling the same bad solution of the last decade. They also love getting into bed with rich or powerful sugar daddies, all while encouraging parasocial fan relationships in which irrationally attached people can't help but throw money at them.

Now they want to sell sexual fantasies.

A more important question stemming from the above assessment is this: Do tech CEOs just want to be performers in the porn industry? If so, it would have saved the world a lot of pain to instead allow the super-rich to be their true selves and hook them up with an OnlyFans deal.

It's a line of work they actually have an aptitude for: being chronically online, all the crypto scams, selling themselves to the highest bidder, and constantly shilling crap that no one wants. Hell, Elon Musk faked his Path of Exile 2 gameplay to act like “one of the guys”.

It's hard not to see this situation as if they are a club of pick-me camgirls (I am not going to define that for everyone's sake).

And this sort of attitude from the industry leaders is reflected in the sorry state of their products.

Chapter bridge:

This is already getting pretty long-winded.
Will keep this crazy train going in a few days with:

The life of AI (Before they fell off and needed to go into porn to pay the bills)