Discussion about this post

Paul Topping

I am a computer programmer involved in AI, though not one of those making LLMs. Your post is a good survey of where we stand and of the various ways the press covers the growth of AI badly. I only have a few comments:

I use "stochastic parrot" to describe the current generation of AI. I still think it is a valid description and, sorry, it isn't how the human brain works to any great extent. We learn the times table by repetition but that doesn't get you far in math class. Think about what things you learn by rote: the alphabet, perhaps a poem or two, the times table. Not much beyond that. That the human brain works a bit like current AI is an invention of the AI community. We started out with artificial neural networks being inspired by biological ones and now they've flipped the script. It's all part of the hype.

The fact that the current AIs are stochastic parrots is exactly why we can't get them aligned with our views on race, etc., and can't get them to be honest without hard-coding it in for specific topics. Most of the recent advances in AI are such patches. They make the models better, but the approach just doesn't scale, and it is not the way to AGI.
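To make the point concrete, here is a minimal sketch, in Python, of what such a topic-specific patch tends to look like. The guardrail table and function names are hypothetical, invented for illustration, not any vendor's actual safety system.

```python
# Hypothetical illustration of a hard-coded, topic-specific guardrail.
# Every rule below is a hand-maintained patch layered on top of the model.

BLOCKED_TOPICS = {
    "topic_a": "I can't offer an opinion on that.",
    "topic_b": "Please consult an authoritative source.",
}

def guarded_answer(prompt: str, model_answer) -> str:
    """Return a canned reply for hard-coded topics, otherwise defer to the model."""
    lowered = prompt.lower()
    for topic, canned_reply in BLOCKED_TOPICS.items():
        if topic in lowered:
            return canned_reply
    return model_answer(prompt)
```

Each newly discovered failure mode needs another entry in the table, which is why this kind of fix doesn't scale.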

As far as the big question is concerned, whether they will take all our jobs, it is a hard one to answer. In past technological upheavals, the jobs that were lost have always been replaced by other jobs. But, as you say, this time may be different. I think the best thing to say is that (a) it hasn't happened yet and may not happen, and (b) until AGI is reached, which is probably not soon, it is doubtful.

Michael Fuchs

Yascha, you’re right to dismiss the ostriches. You’re wrong to accept the hype.

It is true that current LLMs produce the illusion of sentience—the so-called Eliza Effect—better than any machines ever have before. It’s hard not to ascribe intelligence to what these models do.

But they are remarkably easy to trick into absurdity. Their fatal flaw, which can't be fixed because it is inherent to their approach, is the lack of a persistent, accumulating mapping of tokens onto a representation of the world outside. Facts about nature, physical and human, constrain thinking; that is common sense, and LLMs have none of it. The same facts enable the creation of new metaphors. LLMs can usually find proper uses for existing ones, but that's it.
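For readers less familiar with the mechanics, a minimal sketch of the generic autoregressive loop shows what I mean; the `next_token_distribution` function is a stand-in for the model, not a real API. The only state the loop ever consults is the growing window of tokens itself.

```python
import random

def generate(prompt_tokens, next_token_distribution, max_new_tokens=50):
    """Sample tokens one at a time, conditioned only on the tokens so far."""
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        # The model returns a probability distribution over the vocabulary,
        # computed solely from the preceding tokens. No separate, persistent
        # representation of the outside world is consulted or updated.
        vocabulary, probabilities = next_token_distribution(tokens)
        tokens.append(random.choices(vocabulary, weights=probabilities, k=1)[0])
    return tokens
```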

In software coding specifically, there are serious limits on how large a task LLMs can handle, and they are utterly unable to address the overarching meta-issues of professional development, such as how to balance competing desiderata like cost, latency, reliability, and security, partly because that kind of decision making requires common sense, knowledge of human nature, and a mental model of many things.

But beyond these insuperable problems lies something even worse: the economics of the thing are impossible.

The old Industrial Revolution would never have taken place if the owners of the looms had been required to liquidate the entire existing economy—burn all the houses to make enough steam to get the first fabric woven.

There is no conceivable business case for these LLMs to become profitable. Investing trillions to get billions will never work. It will get worse the more they scale. This isn’t like software, where incremental sales come at near-zero cost. This is like steel mills, where capacity is all capex. If you always have to sell at a loss, you don’t make it up in volume, as the old joke warns us.
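As a toy illustration of the unit-economics point, with figures that are entirely invented and chosen only to show the shape of the problem:

```python
def annual_profit(users, revenue_per_user, serving_cost_per_user, amortized_capex):
    """Profit when every unit sold carries a real marginal cost plus heavy capex."""
    return users * (revenue_per_user - serving_cost_per_user) - amortized_capex

# Hypothetical numbers: if serving a user costs more than that user pays,
# every additional order of magnitude of scale multiplies the loss.
for users in (1_000_000, 10_000_000, 100_000_000):
    print(f"{users:>11,} users -> {annual_profit(users, 20.0, 35.0, 2e9):,.0f}")
```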

We are allowing hype grifters to wave tulips at us until all of us step off the cliff. The reckoning is coming.

I appreciate your open-mindedness, your breaking with the Luddite impulse in so many. But you need to get better informed.

Here’s a suggestion. Get together for a chat with Gary Marcus, one of the fathers of academic AI. He has a knowledgeable take, neither ostrich nor gobsmacked, but utterly realistic.
