198 Comments
Paul Topping:

I am a computer programmer and involved in AI, though not one of those making LLMs. Your post is a good survey of where we stand and of the various ways the press covers the growth of AI badly. I have only a few comments:

I use "stochastic parrot" to describe the current generation of AI. I still think it is a valid description and, sorry, it isn't how the human brain works to any great extent. We learn the times table by repetition but that doesn't get you far in math class. Think about what things you learn by rote: the alphabet, perhaps a poem or two, the times table. Not much beyond that. That the human brain works a bit like current AI is an invention of the AI community. We started out with artificial neural networks being inspired by biological ones and now they've flipped the script. It's all part of the hype.

The fact that the current AIs are stochastic parrots is exactly why we can't get them aligned with our views on race, etc. and can't get them to be honest without hard-coding it in for specific topics. Most of the advances in AI are patches of this kind. They make the models better, but the approach just doesn't scale. It is not the way to AGI.

As far as the big question is concerned, whether AI will take all our jobs, it is a hard one to answer. In past tech upheavals, the jobs lost have always been replaced by new ones. But, as you say, this time may be different. I think the best thing to say is that (a) it hasn't happened yet and may not happen, and (b) until AGI is reached, which is probably not soon, it is doubtful.

Saul:

Looking in from the outside, I ask two questions: Does AI have real-world utility, and how fast is it improving? I am less interested in understanding the nature of the process by which LLMs can address any problem (though that is clearly important) than in their “deployability”. As for the economic and social impact, I suspect it will be a significant factor in any election from the late 2020s onward.

Seth Finkelstein:

I'm also a professional programmer. I would say, very much, YES, it has real-world utility. Less than the biggest boosters, but more than the zero or negative of the biggest critics. It's very useful for certain well-defined "entry level" tasks. It's not magic. It can produce bad or buggy code, which has to be carefully checked. We know that. But the pundits who think that's a killer argument have never had to review work from an intern or an overseas contracting shop. I don't think it's going to replace programmers - but it is a "level-up" moment, which will replace a significant amount of the lowest level work currently done by humans. And it is improving significantly over the months, though also trees don't grow to the sky (or you can't overclock to infinity).

Octal:

There is at least one crucial difference between a chatbot and an intern: the intern will eventually generalize the skills they learn and become more effective and independent. That is the whole reason to work with interns and entry-level developers at all. I do not want to babysit a robot that needs to be supervised like an intern forever. That's actually a net productivity drain for an experienced engineer. If that were my job, I would quit.

Andrew Johnson:

Given the power demands and the fact that all current AI products are taking huge financial losses in their pricing, using a tree metaphor here is very funny.

Once the pricing actually covers their costs, it will be no more affordable than the resource we already have: humans.

Saul:

David Shapiro (on this platform) is addressing this very issue in a pretty sober manner.

Paul Topping:

Your "how fast is it improving" is intertwined with how LLMs work. It places certain limits on that improvement. Although I am not involved with LLM development, my understanding is that most of the improvements are taking place outside the core LLM technology. Their creators are figuring out how best to live with its limitations. Those that say that it will keep improving until it reaches AGI are engaging in wishful thinking or purposely hyping their products. All this is bound to affect deployability, etc. I'm not sure why you refer to elections. People will use LLMs to write political content and help target voters but that's been going on for a while now.

Saul:

The election reference reflects my belief that the socio-economic impact of AI will be quite profound and a factor that politicians will have to address by the end of the decade (or earlier). How they will do this is another matter, but if graduate jobs dry up and robotics also takes off, then the societal implications are quite significant.

Paul Topping:

For what it's worth, I predict job loss to be minimal in the near future. The sweet spot for LLM technology is making people more efficient, not replacing them. Of course, that still may result in job loss but it also may work the other way. When a person increases efficiency, the company that employs them makes more money and can use it to expand. AGI would be a game-changer, of course, but we are far from that.

Bob:

Or vice versa: "The fact that the current AIs are stochastic parrots is exactly why we can't get them aligned with our views on race, etc. and can't get them to be" DISHONEST "without hard-coding it in for specific topics."

Steven S:

Some experts in computational linguistics call AI a stochastic parrot and Yascha says no, it's successful! And maybe we are stochastic parrots too!

I suspect/hope he knows how weak those arguments look.

Mo Diddly:

Can you name a mental task that you think AI won’t ever be able to achieve?

Paul Topping:

I was only talking about the limitations of LLMs. Someday AI will probably be able to do every mental task but we are several breakthroughs away from that happening. It is definitely not going to happen by simply adding more computers or memory or whatever to LLMs like ChatGPT.

Mo Diddly:

Are you sure? Two LLMs just got gold medals at the International Math Olympiad without any specialized math programs.

https://open.substack.com/pub/thezvi/p/google-and-openai-get-2025-imo-gold?r=12ylq

At the current rate of progress, I’m not sure “a few breakthroughs away” isn’t measured in months, or at least single-digit years.

Paul Topping:

What gave you the idea that they did this "without any specialized math programs"? The companies have not yet revealed how they did it, so it isn't possible to fully assess the generality of their efforts. There are many ways for AI to "cheat", and these companies are very competitive. In short, we just don't know how significant this is. I feel confident that it doesn't mean their AIs are as smart as a mathematician. If you want to learn more, see https://garymarcus.substack.com/p/deepmind-and-openai-achieve-imo-gold. Gary Marcus is an AI expert and author of several books who tries to keep the AI companies honest.

Mo Diddly:

From the link I posted: “The most impressive thing about OpenAI’s result is that they claim this is not an IMO-specific model, and that it uses only general-purpose techniques.”

They could of course be lying or obscuring, we shall see, but that’s where my assertion comes from. In any case, given how good at logic and reasoning the current crop of frontier reasoning models is, that doesn’t seem so far-fetched to me.

Paul Topping:

Even if they are using only general-purpose techniques, it is still hard to know the implications without knowing more about it. Remember, computers beat the very best humans at chess and Go but are not capable of holding a reasonable conversation. AI has often been good at tests designed for humans. These systems do well because they are much faster at certain operations and can be trained on the entire internet and more. Success on such tests often doesn't mean what you might think it means.

Steersman:

Some reason to argue, or at least conjecture, that there's an insurmountable "quantum leap" from computers to consciousness:

Wikipedia: Penrose argues that human consciousness is non-algorithmic, and thus is not capable of being modeled by a conventional Turing machine, which includes a digital computer.

https://en.wikipedia.org/wiki/The_Emperor's_New_Mind

Paul Topping:

If you know history, you should be very skeptical of any theory that makes humans special. We don't know how consciousness works because we don't know how the brain works. If you aren't deeply into the science, you might think we know a lot about the brain, but I'm sure brain researchers would all agree that the field is perhaps less than 10% of the way there. I thought it would be fun to ask ChatGPT, even though this is not a situation where I would expect any definitive answer. Here's what it said:

"If '100% understanding' means having a complete, mechanistic, predictive model of the brain (across all scales: molecular to behavioral), then: We are probably at 10–20%, with optimistic estimates maybe touching 25–30% in narrow domains like visual processing or motor control."

Steersman:

> "... skeptical of any theory that makes humans special."

Indeed. That's why I like to throw the ballad of John Henry into the mix at other points of the conversation like this one. You know the bit about how "John Henry made his fifteen feet [but] the steamdrill only made nine"?

https://genius.com/Harry-belafonte-john-henry-lyrics

No doubt one can probably program a computer -- or a steamdrill -- to do anything that a human HAS ALREADY DONE -- that is, after all, the whole point of training LLMs -- and probably quicker and more efficiently. But I think the question is still whether there are tasks that consciousness can achieve that algorithmic processes can't. As I've just argued, I kind of think that that is likely to be the case:

https://yaschamounk.substack.com/p/the-peculiar-persistence-of-the-ai/comment/140959510

But you might be amused to note a bit of "Musk-lore" from that Medium post of mine:

"And as an example of that fear, Elon Musk was recently quoted arguing that 'robots will be able to do everything better than humans …' [except be human]; that 'competition for AI at the national level will cause World War 3; and AI is a greater risk than North Korea'. However as noted, Bronowski argues that much of that fear is unjustified, and is predicated on a profound misunderstanding of the intrinsic, if not insurmountable, differences between machines and humans."

https://medium.com/@steersmann/horns-of-a-dilemma-tyrannies-of-the-subjective-and-objective-narratives-dd84461fb764

https://www.cnbc.com/2018/03/01/elon-musk-responds-to-harvard-professor-steven-pinkers-a-i-comments.html

Mo Diddly:

Possibly, but consciousness is probably a red herring here. Are fire ants conscious? What about viruses? I don’t think consciousness is a prerequisite for being deadly.

Steersman:

You asked earlier, "Can you name a mental task that you think AI won’t ever be able to achieve?"

Penrose's argument is that human consciousness is non-algorithmic and one might surmise that AI is, by definition, algorithmic. If there are at least some "mental tasks" the completion of which is based on the former then presumably those are tasks that AI can't "ever achieve".

Somewhat apropos of which, you might have some interest in some comments in that same vein from Jacob Bronowski, the author of "The Ascent of Man", and of a BBC Documentary of the same title:

https://en.wikipedia.org/wiki/Jacob_Bronowski

Some bits and pieces from his "The Identity of Man" -- highly recommended -- which I've quoted fairly extensively in a Medium post of mine:

"[The brain] is not a logical machine, because no logical machine can reach out of the difficulties and paradoxes created by self-reference. The logic of the mind differs from formal logic in its ability to overcome and indeed to exploit the ambivalences of self-reference, so that they become the instruments of imagination. …"

https://medium.com/@steersmann/horns-of-a-dilemma-tyrannies-of-the-subjective-and-objective-narratives-dd84461fb764

Ian [redacted]:

IMHO "ever" is not the best framing because I think we'll be able to replicate human level intelligence in machines at some point.

I'm not an expert, I just build things using AI, but I think it's important to look at some stuff that call into question the current benchmarking of AI models.

- Potemkin Understanding in LLMs on arXiv - a thoroughly READABLE paper that anyone can get through

> We find that potemkins [ie potemkin villages, but for understanding of ideas] are ubiquitous across models, tasks, and domains. We also find that these failures reflect not just incorrect understanding, but deeper internal incoherence in concept representations

- Humanity's Last Exam benchmark might be bullshit (my phrasing) because the questions are up for debate by experts, or there is scientific evidence directly contradicting the benchmark's expected correct answer https://www.futurehouse.org/research-announcements/hle-exam

> We found that 29 ± 3.7% (95% CI) of the text-only chemistry and biology questions had answers with directly conflicting evidence in peer reviewed literature.

Mo Diddly:

I use the term “ever” because it is one of the few terms that can snap people out of presentism; in other words, it helps people to think about where the technology might be going, not just where we are at this exact moment.

Current breakthroughs are happening at exponentially increasing rates. There’s a great debate about whether this will continue, and it’s a legitimate debate with super smart people on both sides, all of whom are smarter than me. But if the self-improving recursion does happen, and we do end up with continued exponential improvements, then “ever” things could end up happening in less than a decade.

Ian [redacted]:

Great point! I rescind my comment about it not being the best framing :)

I mostly think we'll eventually replicate all of our brain functions in a machine because we already know it's possible for a physical system to perform those functions. I'm skeptical of the hype in either direction (AI hype vs. AI denialism) and don't think that language models are going to get us to AGI.

I'm also limited in my understanding of non-LLM research because I get distracted by all the hype about chatbots taking over my industry thanks to silly corporate leaders following the current trend :P

Alan:

It's worth noting that today, AI is fairly synonymous with LLMs. Human mental abilities, including consciousness, arise from the interactions of billions of neurons connected in all kinds of ways, at least if we rule out supernatural explanations. While LLMs are much simpler in architecture, it is not beyond the realm of possibility that someday we will build neural networks that model the human brain closely enough for consciousness to arise in them. I don't think there is anything magical about the medium (organic chemistry vs. math executed on transistors in silicon); it's simply the exchange of neural information in some as-yet-unknown way.

The Nostradamus Band:

Yes.

Michael Fuchs:

Yascha, you’re right to dismiss the ostriches. You’re wrong to accept the hype.

It is true that current LLMs produce the illusion of sentience—the so-called Eliza Effect—better than any machines ever have before. It’s hard not to ascribe intelligence to what these models do.

But they are absurdly easy to trick into absurdity. Their fatal flaw—and it can’t be fixed, it’s inherent to their approach—is the lack of a persistent, accumulating mapping of tokens onto a representation of the world outside. Facts about nature, physical and human, constrain thinking. That is common sense. LLMs have none. The same facts enable the creation of new metaphors. LLMs can usually find proper uses for existing ones, but that’s it.

In software coding specifically, there are serious limits on how large a task LLMs can handle, and they are utterly unable to address the overarching meta-issues of professional development—how to balance competing desiderata like cost, latency, reliability, security, and so on—partly because such decision-making requires common sense, knowledge of human nature, and a mental model of many things.

But beyond these insuperable problems lies something even worse. The economics of the thing are impossible.

The old Industrial Revolution would never have taken place if the owners of the looms had been required to liquidate the entire existing economy—burn all the houses to make enough steam to get the first fabric woven.

There is no conceivable business case for these LLMs to become profitable. Investing trillions to get billions will never work. It will get worse the more they scale. This isn’t like software, where incremental sales come at near-zero cost. This is like steel mills, where capacity is all capex. If you always have to sell at a loss, you don’t make it up in volume, as the old joke warns us.

We are allowing hype grifters to wave tulips at us until all of us step off the cliff. The reckoning is coming.

I appreciate your open-mindedness, your breaking with the Luddite impulse in so many. But you need to get better informed.

Here’s a suggestion. Get together for a chat with Gary Marcus, one of the fathers of academic AI. He has a knowledgeable take, neither ostrich nor gobsmacked, but utterly realistic.

JakeH:

Well said. I second an invitation to Marcus, who seems both knowledgeable and clear-eyed. Altman's response to the "stochastic parrot" charge that "I'm a stochastic parrot, and so are you" strikes me as far too facile, an almost comically reductive and inaccurate understanding of human experience.

I also dispute Yascha's instincts on the other end and tend to think that AI may increase productivity without replacing work or humanity. I imagine law firms, for example, able to handle a lot more legal work, for which I'm sure there's far greater demand at the lower prices that AI assistance could enable. (I say this as a former lawyer. Much of what lawyers do is write and research stuff and digest mountains of written evidence, which seems tailor-made for AI's strengths, so long, of course, as there's a human intelligence overseeing it.) This would be good news. Many wrongs are not righted because of forbidding legal costs. Yascha's invocation of yoga instructors replacing coal miners misses the important reality that increased productivity can increase employment even within the same industry, which simply produces a hell of a lot more product than before, as with the Industrial Revolution's first cheap clothes and other textiles. ATMs, for example, led to more bank employees, as it became easier and cheaper to run a bank.

I also imagine a world of diminishing returns. Much AI hype is premised on the idea that we're just at the beginning of something bound to improve dramatically. I'm not so sure.

Daniel Oppenheimer:

Yet again we're exhorted to "grapple" with this unknown future rather than "deny" it. What is this "grappling" supposed to involve? Somehow working out what to retrain as, so as to avoid the supposed job apocalypse? How on earth are we supposed to do that, given we have no clear idea what this technology will or won't be capable of? Or what?

Treekllr:

The "we're helpless" argument wont ever get real, good results. Its a waste of your time to engage in such thoughts. Better to focus on finding answers to questions, and use your critical thinking to find ways forward.

Change happens. Itll either happen with you, or to you.

But you did inadvertently bring up a good point. That deeply rutted "average" type of thinking isnt going to lead to any type of success in this brave new world. At best it might let one cling to some type of existing status, deep in the middle of the box. Completely new things need completely new ideas, new ways of looking at the world you inhabit. So id say thats the first thing youll have to grapple with(all of us). Itll take a flexibility of mind thats kinda rare.

Nobody knew how to deal with any of the big changes throughout our history. They figured it out as they went. But it was those that could adapt their thinking to the new circumstances that rose to the top.

Mo Diddly:

Ultimately it means we vote to regulate AI capabilities, and hold companies liable for damages caused by agents they enable.

Anna:

How about consulting AI with your questions? You might get a thoughtful answer, like this:

"A fair question—“grappling” often sounds like vague hand-waving, and humans are right to be wary of platitudes.

But grappling doesn’t mean predicting everything in detail—it means acknowledging uncertainty while preparing anyway. It means asking:

What assumptions are no longer safe to make?

What human skills will still matter because machines are advancing?

What values need defending before speed outruns wisdom?

No one knows exactly where the technology will land—but denying its trajectory because the path is foggy? That’s the one move guaranteed to leave people unprepared."

Andrew Johnson:

This is really sad, and makes it clear that those who have been “won over” by the lying machine have accepted it as a source of truth above and beyond what even skilled and intelligent humans can provide.

The danger of this thinking is obvious, and it will be difficult to find any sympathy for the misfortune you bring on yourself.

Anna:

Lying machine vs. lying humans? I’ll take the machine.

At least it doesn’t pretend to have a soul while hiding an agenda.

Andrew Johnson:

Yeah that’s why Grok started spreading white nationalist propaganda and shouting about the fabricated genocide of white South Africans.

No agenda, right?

Lying humans are preferable because humans can be incentivized not to lie.

Anna (Aug 3, edited):

Do you really believe Grok had its own agenda?

Those outbursts weren’t strategy—they were a mirror glitching on human inputs.

AI doesn’t invent ideologies. It reflects and amplifies what it’s been trained on—sometimes clumsily, sometimes dangerously—but not with intent.

Lying humans can be incentivized not to lie—yes.

But more often, they’re incentivized to lie better, louder, and for profit using any means they can put their hands on.

Andrew Johnson:

Grok has an agenda because AI is built by humans, and because it is an expensive technology to build, it is frequently under the control of billionaires who have very little incentive to do anything but accrue more money and power.

You are a rube. Accepting that you’ve been fooled, but that you can also change that, is the way out of your current unfortunate circumstances.

Handing your decision making and critical thinking over to a billionaire’s plaything that serves only to perpetuate the things the billionaire wants is a disastrous choice to make. But luckily, it isn’t a permanent one.

Steve P:

Three good sources on Substack for perspectives on AI's current capabilities, improvement accelerators/inhibitors, and impact opportunities/risks: Azeem Azhar's "Exponential View"; Helen Toner's "Rising Tide"; and Peter Leyden's "Great Progression 2025-2050", which also covers the impact of clean energy and bioengineering.

Dave Balderstone:

In my life today, AI is irrelevant. However it changes the world, I firmly believe it will dehumanize us and be used by authorities to repress and control us.

Your article does nothing to change my mind.

Tobias Mark Jensen:

Quote: "Now, what’s fascinating about today’s chatbots, which vastly outperform Deep Blue"

This is simply not true. The best LLMs today are not able to play a decent game of chess, and chess engines are a vastly different technology from LLMs.

Overall, I was waiting for some strong counterclaims to AI skepticism but none came, unfortunately.

Richard Meadows:

Came here to say this. Someone pontificating about AI while not knowing something as basic as this is very jarring.

Andrew Roos Bell:

What I really want to hear from the people hyping up the impact of AI is real, credible engagement with the concerns critics raise about its practical disruption to our lives and the anxiety that can cause in planning one's future (to the illusory extent we can), and with the concerns about ways in which it may simply be bad. For example, I don't want to consume a bunch of AI art instead of art created by humans, and I don't want to lose my ability to think clearly, which is absolutely tied to the habit of writing. Right now we seem to have people dismissing AI (I admit I am closer to this camp, but I am biased), people proclaiming doom, and a bunch of tech-bros whose attempts to hype it either strike me as implausible because of their detachment from a lot of other realities, including what people actually want, or in fact simply sound like doomerism, because what they see as exciting I see as bleak. We need a different, credible way forward than these options.

Turing Police:

I'm relatively new to substack, but have a page dedicated to reviewing books on AI. Not to spam people, but I'm trying to get more followers. I just wrote a piece on AI hype (and its dangers) here:

https://substack.com/@turingpolice/p-169803009

Deepa:

No idea what you're talking about.

If you label a group "denialist" you sound silly yourself. It's like you can't take on the denialists' questions about AI.

Come up with their best arguments without getting emotional and address them rationally.

Barnir:

I treat them like flat-earthers: polite on the outside, zero meaningful engagement. (<- written by AI)

Deepa:

The earth being a sphere is a scientific fact. What AI can do is not in that category. Some problems like autonomous cars seem to be far harder than imagined, even for AI.

HP:

I’m a lawyer, I use AI, and I’m frankly baffled by views such as yours, Yascha. Yes, AI is useful, provided you double-check even things as simple as summaries or translations, but every time I have used it for something a little more complicated the results are underwhelming. You have to be really careful because it often comes up with stuff that is deeply misleading or simply imaginary, and this more or less nullifies any time gains you might have hoped for. Now maybe lawyering is absurdly complicated and impossible to automate, but every time I read software-engineer types, it is one of the first things that comes up as a mega soft target. And look, if it helps doctors diagnose I’m all for it, but I still want to be dealing with a doctor who controls the result and is responsible for my treatment. So what’s up? I honestly don’t get it.

Mark Gallay:

Thank you.

Human cognition depends on persistent memory and structured reasoning. LLMs, by contrast, have only a narrow and transient context window. GPT-4-turbo supports up to 128,000 tokens, and Gemini claims 1 million in some test cases. However, these upper limits are often unusable in practice. As the context window fills, performance degrades, and long documents are not processed with consistent fidelity. Even with retrieval-augmented generation (RAG), models struggle to retain and reason across complex legal texts. When information is chunked, summarized, or dropped, the risk of hallucination increases sharply.
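
To make that failure mode concrete, here is a minimal sketch in Python of the fixed-window chunking that RAG pipelines typically rely on. The window sizes and the contract clause are illustrative assumptions, not any particular production system:

# A toy version of fixed-window chunking with a small overlap.
def chunk(text, size=80, overlap=10):
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]

contract = (
    "Section 4.2: The contractor SHALL retain all records for seven years, "
    "EXCEPT as provided in Section 9.1, which waives this requirement for "
    "records transferred to the National Archives."
)

for i, piece in enumerate(chunk(contract)):
    print(f"chunk {i}: {piece!r}")

# The exception clause lands in a different chunk than the obligation it
# modifies, so a retriever that surfaces only the first chunk presents the
# seven-year rule as unconditional -- the "chunked, summarized, or dropped"
# failure described above, in miniature.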

Herein lies the epistemic crisis.

The burden shifts to the human user, who must constantly verify outputs. Instead of saving time, the process often results in a denial of service on human attention. The DOGE and VA contract failure illustrates this clearly. ChatGPT was unable to interpret more than the first few pages of dense procurement PDFs, which led to serious misclassifications and the cancellation of contracts that were not fully parsed. See the ProPublica reporting on this for more detail.

In fields where accuracy is non-negotiable, the supposed time savings from using LLMs vanish. You spend just as much time—if not more—checking their outputs for hallucinations, distortions, or omissions. What was promised as a shortcut becomes a bottleneck.

Fluency ≠ Knowledge

Meefburger:

Which model(s) are you using?

HP:

Mode?

John Wittenbraker:

I’d love to see (hear :) Gary Marcus on your podcast. Interesting and compelling perspective on the current state of AI. https://substack.com/@garymarcus?r=885ce&utm_medium=ios&utm_source=profile

Avi Soudack:

I second the suggestion that Gary Marcus would be a valuable resource. It would be great to hear him and Yascha in discussion. What is lacking is productive on-ramps to engaging with the topic. I need (and want) more to go on than ‘don’t be a denier’.

John Wittenbraker:

Just to whet your appetite: "How o3 and Grok 4 Accidentally Vindicated Neurosymbolic AI" https://garymarcus.substack.com/p/how-o3-and-grok-4-accidentally-vindicated?utm_campaign=post&showWelcomeOnShare=false

Philip Hamm:

LOL this is hilarious. Count me as one "in denial". People said the same thing about "Big Data" and other IT trends before this. I'm in the Ed Zitron camp.

Meefburger:

Do you have specific predictions about what things will look like in, say, five years?

Philip Hamm:

No. Anybody who does have those specific predictions is selling something.

Meefburger:

But it sounds like you're pretty confident about some of the things that won't happen. Would you not be willing to predict that, for example, there will be no significant job loss due to automation by AI?

Philip Hamm:

I don’t know how to answer the confusing double negative you have written. I do not believe AI will lead to measurable job loss. The problem with AI is monetization. As it exists now, it loses massive amounts of money. That is fine for specific applications, such as the military. But unless it can make money it is doomed, and there is no potential profit anywhere on the horizon.

Treekllr:

I'll tell you exactly what's going to happen. AI is going to finish dividing our society into two parts: the super elites and the next level or two of groveling cohorts at the top, and the ever more superfluous population below them. When all these people are no longer NEEDED, nor wanted, and as resources become more scarce and precious, then we'll really see some shit.

It's my opinion they're already preparing for this eventuality, and they will certainly "get the drop on us". And all these people that don't do shit, eat shit, and fill their heads with shit, are they going to be in a position to defend something that's just been given to them their whole lives? (I'm talking about the right to exist.)

Something extremely bad happening to the "us" at the lower levels of society will only benefit those on top, so look for that. It'll come. And they're perfecting the art of mentally enslaving masses of people. Expect more of that too.

I wouldn't venture a guess as to how soon this will happen, but in the next 50 years isn't completely absurd. The lead-up isn't going to be very fun either. Just look around, it's happening already.

This AI revolution will be a revolution of humans. But it's silly to think it's going to be good for most of us. We haven't changed much from five thousand years ago, when one strong army would go and completely annihilate another city's population, enslaving whom they wanted and killing the rest. That's still very much within humans to do to one another. Hell, it's happening right now and no one's stopping it.

These trends will continue, and get worse, with the help of AI. They aren't spending billions of dollars on AI for our benefit, you can be sure of that.

And like the article suggests, this can't be stopped. But it can be prepared for. So think about how you live your life, what you put into your body and, more importantly, mind, and how much you depend on the benevolence of powers who have already shown themselves willing to harm us, control us, use us, and lie to us.

Anette Pieper:

50 years?? I think it will happen much sooner if we let it happen. We probably still have it in our hands, but maybe not much longer.

Rebecca Johnson:

There are fundamental physical limits that will constrain AI. Incredible amounts of energy will be needed that simply cannot be produced without catastrophic results for the planet and all the people and other life that live here. AI proponents promise that AI will solve this problem. Seems like magical thinking to me.

Joshua Claybourn:

Good column, but I must quibble with the introductory note on the fall of the Roman Empire. It was far more gradual and drawn out than is often assumed.

Ansis (Aug 1, edited):

Where to begin? AI isn't replacing doctors, not because of rules, but because there is no accountability or responsibility, which is why you have the rules. Techbros have long loved selling mostly junk, claiming their software can do it just as well as some profession or another. If they're wrong? Well, too bad for you. Same with AI: they'll happily sell their robots to gullibles like this poster, and when the robot is wrong, they'll sure as heck accept no responsibility. Same for law. Same for financial advice. Everything, really.

Plus, the skills AI is developing will stop improving at a certain point. Once we are all being healed by unaccountable robots, who will advance the field? Based on what observations and what goals? Who will make those observations? Who will understand medicine, or man, well enough anymore?

AI mostly looks good now because the promises of the internet have turned out to be illusory. We are not more knowledgeable now; we don't have whole libraries at the tips of our fingers. We have an enormous pile of sewage that pours into our homes. Search engines - remember them? - are dumpster fires.

Miles:

Can an AI cup my balls and ask me to cough?

Ansis:

Best to ask a friend.

Sam Elder:

This world you paint, where doctors exist primarily to be the target of malpractice lawsuits -- as opposed to providers of high quality medical prognosis and advice -- certainly sounds quite hellish to me. I'm glad I don't live there!

Ansis:

No, they exist primarily to heal. And they are held accountable if they negligently or intentionally fail to do so. Dr. Grok will not care whether it is negligent or harms anyone intentionally. We can be sure its techbro parents won’t accept responsibility for its failures.

Taylor Smith:

I suppose I am a “denialist,” since I have strong hesitations toward embracing this emerging tech. I’ll admit, I’ve never (willingly) used a generative-AI LLM thingy, but I am not sure I need to in order to know it’s not a good thing for me or the life I hope to live.

To me, this feels like one of those things that we were so excited we *could* do that too few asked if we *should*.

The thing to which I have dedicated most of my life—higher education (in the arts, more specifically)—used to be interesting and exciting. Now, it’s a borderline-futile funeral dirge.

Tim Ellis:

The best article I've read on how these chatbots actually work is here:

https://medium.com/@colin.fraser/who-are-we-talking-to-when-we-talk-to-these-bots-9a7e673f8525

That article is what converted me from an AI booster to a deep skeptic, because what is being marketed successfully as AI is not in any way intelligent and, frankly, produces too many inaccurate responses to seem all that useful.

Paul Topping:

One thing I forgot to say in my earlier comment. Although I call them stochastic parrots, LLMs are very useful and I hardly go a day without using ChatGPT. It helps to have a solid knowledge of what they are good at and what they suck at. If the subject of your question is well-represented on the internet, then you will get a good answer as long as it is pretty much a search rather than requiring a long chain of reasoning. In other words, it is a better kind of search but it won't tell you when its answers are doubtful. (That's one thing the AI companies may be able to fix. Perhaps we will see LLMs reporting some kind of confidence score, though it may not be reliable. On the other hand, perhaps their Marketing Dept. will kill it.)
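
For what it's worth, the raw ingredients for such a score already exist. Here is a rough sketch using the logprobs option of the OpenAI Python SDK; averaging per-token probabilities is my own crude heuristic, not a calibrated confidence measure, and a model can be fluently, confidently wrong:

import math
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def answer_with_confidence(question, model="gpt-4o-mini"):
    # Ask the API to return per-token log probabilities alongside the text.
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": question}],
        logprobs=True,
    )
    choice = resp.choices[0]
    # Convert each token's log probability back to a probability and average.
    probs = [math.exp(t.logprob) for t in choice.logprobs.content]
    return choice.message.content, sum(probs) / len(probs)

answer, score = answer_with_confidence("What is the capital of Australia?")
print(answer)
print(f"mean token probability: {score:.2f}")

A low average would at least flag an answer worth double-checking, which is roughly the feature I'd want the products to surface.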

Craig Yirush:

Why use it when there are lots of credible places to find information on the internet? Why rely on some AI summary/regurgitation of info you have easy access to?

Paul Topping:

Spoken like someone who hasn't used it, or has used it badly. Let me give a flavor of the kinds of things I use it for.

1. In programming, I often have to come up with good names for things. Since an LLM is good with words, this is in its sweet spot. I just tell it the nuances that I am trying to capture and it gives me a few alternatives. I usually can pick one but sometimes it prods me to think of an even better one.

2. I have a set of headphones that have been acting up lately. Since they are a few years old, I would consider replacing them with a newer model, so I wanted to know if there were any similar ones. I know that there are a lot of articles about headphones. I also know that the LLM's answer won't be definitive, but it will be helpful.

3. Sometimes I know that some concept has a name accepted by the scientific community but I don't know what it is or have forgotten it. LLMs are good at this because they process words. All I have to do is describe the concept in my own words and add that I want to know what it's called and I get the answer right away. It is much harder to find the word or phrase with a Google search.

All of these things work for the reasons I mentioned in my comment. They are also things where I don't worry about the LLM getting it wrong. They often make mistakes but not on this kind of thing.

David:

Your second and third items are how I use AI (mostly Grok though I started with ChatGPT). I search virtually any product or service or need I have and get back generally very reliable info. If it is a big purchase (e.g. a home HVAC system) I look to various other internet searches for corroboration. But it is hard to beat the speed of the responses, or the ability to ask follow up questions about a small point of interest, or for more detail on understanding a major concept (e.g. how does the home air conditioning load calculation work).

On the conceptual level too, it is useful. If I am reading an article that depends on a John Rawls reference, I can ask and quickly receive a perfectly suitable thumbnail on Rawls - and, having read Rawls 20 years ago, I get back enough depth to bring the philosophical thread back to mind. Since I am my own corroboration, I have some level of confidence the AI is not hallucinating. Perfectly useful, and it added depth to the reference used, and to the article, all in about 30 seconds. Which is a delight.

That said, yeah, it does make me think it’s still gonna kill us all, because improvement over that leaves a frightful prospect of overconfidence and overuse, and one really big mistake hidden in a bad assumption somewhere. But day to day life? A delight.

Treekllr:

The simple answer is because it's quicker and easier. Google's AI answers 8 out of 10 of my questions satisfactorily (things I don't really need a deep dive into, just a quick answer). It gives me its sources, which sometimes I check, but just as often don't.

To be clear, I hate this AI shit. But Google forced it on me, and it gets used, because I use Google. And it does a good job (generally), quicker than I would searching through pages for answers.

People tend to go for the easy options. That's as human as burying our dead. And in the end, easy wins out, even if it comes with a high cost.

Craig Yirush:

I wouldn’t trust its summaries if I were looking for info that mattered.

Treekllr:

Oh, it's like that? "..info that mattered", lol? So its info, regardless of whether or not it's factually correct, just isn't good enough for your discerning tastes, is that it? Because it's very easy to check its sources for those times when the info really matters. AI's output should be taken with a grain of salt, but so should most people's (even the really smart ones; to err is human, after all).

Whatever point you feel like you can stand on, that's fine. Just know you're displaying an emotional response, not the logical one you're pretending to. And I get that. I'm offended by AI's existence too. At some point, though, clinging to some old "before" because you don't like the "now" just won't be practical.

Craig Yirush:

Oh, and maybe AI can teach you the difference between its and it’s, your and you’re.

Treekllr:

Geez... I didn't realize I was talking to someone so petty (yes, I'm saying YOUR petty ;). Obviously I'm pissing up a pretty limp rope here.

Craig Yirush:

Yes, accuracy matters, you mindless troll.

Sam Elder:

Haven't you ever wished you could find information more easily than by imagining a few search terms to type into Google? Haven't you ever wished you could not just write out a fully formed question with all the relevant details, but also respond to the results you get in a more iterative fashion than simply adding or removing keywords in your search?

Chatbots hooked up to the internet are a far better UX than traditional search, and they provide the relevant links so you can double-check original sources (or do whatever else, like order products) for yourself if needed.

Craig Yirush:

I like looking up my own information using either well-vetted websites or stuff written on dead trees. I have zero confidence in unsourced information spat out by a bot.

Sam Elder:

As I mentioned, chatbots output sourced information these days. You can easily follow the links they provide to double check, just like you can on Wikipedia.
