80 Comments
Tobias Mark Jensen

Great post. However, generative AI is trained to spit out work that is superficially pleasing to the human eye and mind. Against this background, I don't think it's that surprising that average humans will rate AI-generated poems or translations over those of legendary poets and experienced human translators.

First of all, the quality of art and writing is subjective compared to science and math. Secondly, meaningful and deep creative work is often difficult to consume and understand. I suspect that many people would also prefer a Big Mac over a three-star Michelin dish if they didn't know what was what.

My point is: perhaps the fact that people are choosing AI-generated over human-generated creative work tells us more about (online) culture than about the capabilities of AI? After all, AI-generated work is only impressive because the AI has been trained to replicate the work of humans based on millions and millions of examples. AI still lacks any authentic world knowledge, which is why it's hard for me to see how it could express anything original about the world that is not based on combinations of previous work from its training data.

Kathleen Weber

Four points:

The AI-generated translation of Homer might be more pleasing to a modern American mind, but is it as accurate? Does it reflect and suggest ancient Greek culture as well as a translation by an expert does?

Since AI is trained on the Internet, I view it as possessing a mid-brow cultural competence. Surely, a mid-brow audience would be able to connect more easily with poetry based on a mid-brow knowledge base. In a similar way, unsophisticated Americans are moved by Hallmark verse. Since poetry plays so small a role in our educational system, I think that only one in 10,000 Americans could be ranked as very familiar with the works of the great poets. Some participants in this study identified themselves as knowledgeable; quite frankly, I doubt it.

My personal sphere of expertise is the visual arts. I got a BFA. I am certain that I have never been fooled by an AI-generated image. They stand out like an absolute sore thumb. I have often tested myself on my ability to guess what century an image was made in, and I achieve about 90% accuracy. I would be happy to be tested on that question.

Some months ago, Thomas Friedman was gushing about an AI-generated poem about his wife's non-profit organization. It was sappy dreck.

Andrew Hastie

I think these are really good questions! And I think that the key to understanding the problem of AI that Yascha brings up here is in considering both possible answers to your questions.

For example: does [the AI] reflect and suggest ancient Greek culture as well as that translated by an expert (in ways maybe only an expert could detect)?

Let's say it doesn't right now. Is there any hard limit stopping it from doing so? Would a better model, or a tighter, more expert set of sample translations than the full internet canon (per your observation), fix this?

It could even be something more concrete and difficult to overcome, like "there aren't enough good expert translations of the Odyssey to train a good model (yet)."

None of these hurdles are impossible to overcome. If a human (organic machine) can do it, likely some sort of simulated model can too. Even now we're integrating grown organic machine-like brain tissue into metal machines and seeing results much like a metal neural net would produce.

And then on the other side, let's say we look into it and determine that the AI actually is much better at capturing the nuances of the Greek? Maybe human brains, with chemical components, just get tired the way a human arm does and a robot arm mostly doesn't. This is already the case in chess: human players make mistakes; DeepMind doesn't.

What if we discover that the error-prone human mind really does mess up translations more than a steady AI trained on human input? To Yascha's question: does your (or any) identity rest on the assumption that this will never happen? I hope that mine does not.

Treekllr

That's kind of beside the point. It doesn't matter if AI is standing on the backs of previous and current humans. What matters is that it will do these things faster, cheaper, and better (granting "better" the subjective consideration it warrants, but still) than humans now can.

That's kind of what's hilarious about it: that we're giving it the best of us, freely (or, often, stolen).

And as the article pointed out, it will only get better and better at doing this. What kind of AI are we going to have when we probably soon combine it with quantum computers? If not then, at some point it will exceed the complexity and ability of the human brain. What it does after that is anybody's guess (mine is that it'll keep us as pets, lol, because that's what we do. But maybe it won't be so entertained by us. It will eventually be able to self-replicate, innovate, and ultimately take care of itself).

So yeah, it is as you say, for now. But it won't stay that way.

ioanna carlsen

Yes, and the people who prefer AI are the dumbed-down Americans Susan Sontag described back in the '60s, decades ago, and who have now brought in the orange creep they think of as a leader. Same difference: no alternative fact equals the truth, the real poet, or the real leader.

Treekllr

Not everyone who voted for Trump is a MAGA whatever. It might be worth your time to ponder why alot of good and honest and not stupid people voted for him. Because until we understand that, we won't have any unity.

Quentin Hardy

"A lot," not "alot."

Treekllr

What a valuable contribution you've made! It really helps, like ALOT!

Now if only we could get a bot to go around and correct everybody's grammar, it would free you of this tedious preoccupation.

The Radical Individualist

I'd like to offer a few observations.

For the record, while Copernicus determined that the universe did not revolve around the earth, he did believe that the universe revolved around the sun. And, in physics, everything is relative to whatever frame of reference you care to use. It is not mathematically incorrect to use the earth as a reference for the entire universe. But the math becomes simpler when we have the earth revolving around the sun, and the sun circulating within the Milky Way galaxy. And who doesn't like simpler math? Still, the pope wasn't entirely wrong in his disagreement with Galileo.

As for poetry, let's reverse things. Can AI READ poetry and feel enlightened? Can AI read another AI's words, and feel a connection? We do well to remember that AI does an incredible job of mimicking the works of humans, but it is only mimicking. It has no soul, no personality, no hopes, dreams, wishes or resentments.

In this age of technology, perhaps the functionality of AI has supplanted the desirability of the sense of accomplishment in making our own creations. "Look! AI did this in seconds! Why should I go to the trouble of reaching inside myself and attempting to extract my deepest thoughts and feelings, when I can just push a button?!"

Every day, I cuss out this device I am typing on right now. I'm old enough to be experienced on manual typewriters. I don't want to go back to them, but I do have a fond feeling for machines that, correct or incorrect, printed exactly what you typed. The 'smarter' our devices are, the more you have to redirect them to what you want, instead of what they have been programmed to do. And I will never authorize a computer to do my writing for me, even if it's better at it than I am.

Here's a post I did a while back, exploring some aspects of AI:

https://open.substack.com/pub/individualistsunite/p/thinking-inside-the-box?r=z324w&utm_campaign=post&utm_medium=web&showWelcomeOnShare=false

Alan

Right now the AI that most people are familiar with is just a language model. Your question of whether AI can feel a connection or emotion would be like isolating the language/speech centers of a human's brain and asking the same question of just that part of the brain. Of course, the answer is no. Will humankind in the future try to mimic consciousness using computational neural networks in a computer? If so, and if we model all parts of our brains and connect them together, then the answer to your question above may be yes. After all, why do we have emotions or feelings when all our minds are doing is transmitting information between neurons in much the same way? Not to be overly dramatic, but one day a few thousand years hence, AIs may be lounging about wondering what it was like to be made of flesh, the same way we wonder what an Australopithecus was thinking about.

The Radical Individualist

Great observations. I wonder about my own dog. He has an awareness that is equal to my own. Sometimes I think his perceptiveness exceeds my own. On the other hand, he is not particularly rationally intelligent. He will never do higher math, or even low-level math. But he understands my wife and me. It's hard to say if we control his life or he controls ours. Some of both, I guess.

Maybe I have too much time on my hands, but I wonder about earthworms. Do they savor the dirt they eat? Do they do more than just organically react when the earth is too dry or too wet? Do they look for someone to blame when they get caught by a robin? Do they know they are being eaten? The answer to all of those is probably "no," yet the earthworm mostly makes the right moves in order to survive.

But the bottom line is, I think we have a soul. I use that term loosely. I don't claim to know what it is. But we are more than the sum of our synapses. I don't think there will ever be a HAL that cares even as much as an earthworm about survival.

Gabriel Conroy

My comment here isn't directly relevant to your comment, but for some reason your comment made me think of this:

I keep a private, handwritten diary/journal. I've been lax about making entries the last several years, but I return to it. My goal isn't really for anyone to read it. It's not really interesting to anyone other than me, and possibly my spouse should I die before her. It's mostly personal stuff about feelings, etc. But I feel the need to do it, and I get something from it. I don't know whether to call it self-fulfillment or something else. But I do it because I want to.

Now, I can turn around and say "no AI will ever feel the need to write a journal for itself." Maybe that's true. But that's not exactly the point. I'll still feel the need/desire/wish to keep a journal regardless of whether an AI bot (or whatever we'll call it) does so. Or I'll stop feeling that need/desire/wish. It's not that AI can't influence me on that point. It's not even that AI can't take it away from me. I can see myself being so caught up in AI-generated "content" that I cease to be a person wholly independent of it. But there's still an ounce of a "me" left over, at least for now.

By the way, I too remember having a typewriter. (I'm 51, if that means anything.) In its own way, it was frustrating, in part because I made a lot of typos and fixing them was a chore. Like you, I don't wish to return to them. But also like you, I still have a fond feeling for them as well.

The Radical Individualist

We have become so technology oriented that we have abdicated a lot of personal responsibility and even a lot of our personal identity. If we are sick, we expect a doctor to heal us when we might have avoided illness by being more personally responsible in our lifestyle. We want governments to provide jobs and housing instead of working things out for ourselves.

And now we want machines to do our thinking for us. The question is not just what AI can do but to what extent we should let AI do it. Just within my lifetime I've seen increasing numbers of people turn from personal responsibility to expectations of being provided for. AI will never solve that problem.

James Quinn

I suppose the next step would be to have a Constitution generated by AI, to see if it provides us with a better model than the original.

Richard Bennett

Several cosmologists and philosophers, such as Alfred North Whitehead, have concluded that the object of living beings is to create a more moral and loving context in which to exist. Do you see that as something which AI can conceive, advance, and create on its own?

I still maintain that AI is a creator of synthesized works, using as its medium the creative product of human feelings.

James Quinn

I imagine that given enough input, AI could generate a wonderfully complete and complex code of laws, although of course it would be fascinating to see how it would cope with the different views on legal codes that exist around the globe. But one does wonder whether a non-human intelligence could generate anything regarding "the arc of the moral universe" without such extensive human input as to render the machine version nothing more than an echo.

Being an anthropologist by formal training, although I haven’t actually practiced as a profession outside of some field archaeology, my sense is that Whitehead is probably being a bit optimistic in that assessment. I have long since taken my sense of our being from what I’ve come to believe to be our four part nature - the animal, the poet, the builder, and the would-be god. We are the often internally contradictory combination of our fears, our dreams, our plans, and our hubris.

I do think, with Keats, that "Beauty is truth, truth beauty," and that it is a form of both that we seek, in spite of the fact that while that search has sometimes led us to the greatest heights we have achieved, distorted versions of it have also far too often led us to do things to ourselves that we afterwards cannot conceive that we could have done.

The author and playwright Robert Ardrey in his book African Genesis referred to us as ‘the storm animal’. I suspect that’s a pretty good thumbnail description.

Scott Burson

Ugh! Terrible idea.

Dan Colbert

I haven't read these AI poems, so I really can't comment with authority, but I am deeply skeptical of drawing the conclusion that simply because they are deemed "better" by most people (even experts, whatever that means here), they really are. It has been clear to me for quite some time that almost no one reads poetry anymore, let alone understands it. What people try to pass off as "poetry" now is generally prose that's been rearranged on the page to appear, in gross form, like poetry. It's not. The use of metaphor, indeed of all types of irony, is nearly dead, and so, necessarily, is any understanding of it. Meter is mostly dead. Elevated language - the heart of poetry - is mostly dead. Perhaps AI "poems" can compete somehow with modern non-poetry, but I'm highly skeptical that they have anything on true poetry.

Treekllr

I don't know; perhaps because it's taught on the best of humans' efforts, it can or will be able to use effectively all those techniques that are nearly or mostly dead in humans. It's not limited by our shortcomings. If you teach an AI to do those things, will it not do them? I don't see why it wouldn't.

Quentin Hardy

"What will be left of our self-conception once artificial intelligence becomes better than us at writing poems or making movies?"

And, once hummingbirds have mastered the French language, will they use the subjunctive mood in ways familiar to us, or will that provide insight into their special consciousness?

Hypothetical questions need some evidence that they could come true in order to be interesting. In order to write a good poem an AI would have to feel like writing a poem, which is an extraordinarily complex moment that scientists have no clue how to replicate, let alone engender as an ongoing novelty.

Dan Colbert

Extremely well said.

Leo Francis

I like a lot of Yascha Mounk's writing, but I believe this article goes badly wrong at this point:

"But it is clear that the miracle of our minds is ultimately produced by an intricate arrangement of physical matter."

That's not clear, and it should not be asserted as such. Certainly, there are many who would agree with that statement. But it still cannot be asserted as some sort of established fact.

Scientists ultimately do not know what consciousness is. And that statement may very well make a lot of people uncomfortable, but we should not therefore simply try to brush past it.

Science, obviously, helps us to explain a lot. But it is, ultimately, an objective endeavor. And it is therefore inevitably limited. It cannot, for example, explain why we dream. Or even what dreams are. No doubt there are theories. But for these and similar questions that are ultimately of a subjective nature, science may not really provide the best approach.

And that's why so many people, including Mounk, ultimately make incorrect assessments about AI. We cannot accurately address the topic of AI without an adequate insight into what makes us human. And, as the quoted statement above makes clear, Mounk lacks that insight.

The fact of the matter is that AI cannot write. Nor can it produce any kind of art. And it never will be able to do so. Hence, I doubt that humanity will ever be humbled in the way that Mounk predicts. Certainly, AI can string words together, and it can sometimes do so in an effective manner. But that's not writing. To describe that sort of activity, we will need to come up with a different word altogether. Because if we continue referring to it as writing, we will simply continue sowing confusion.

Actual writing, like all art forms, requires consciousness. Without insight, creativity, inspiration, and other mysteries of consciousness, we cannot actually create. We can only collate. AI is thus limited to collation. And there may very well be important applications for that sort of collation. But, again, we need to specify very carefully, perhaps with the help of a new vocabulary, what AI can do as well as what it can't do.

I would even go so far as to predict the exact opposite of what Mounk expects. Rather than humbling humanity, AI might wind up exalting it. We may finally be forced into a greater appreciation of the wonders that only those with consciousness are capable of.

coffeebits

An interesting article! I'm going to quibble with you about the importance of the subconscious, though... It is the third great humbling, and we find ourselves at the brink of the 4th. Is it really of little import to you to realize that you aren't yourself, do not know yourself, are not perceiving the world around you but a carefully manicured and pre-interpreted representation of it, and do not command your own actions?

Alan

This can be tied into the question of whether we actually have free will or whether physics and the biomechanical/computational nature of our brains makes each one of our thoughts completely deterministic. Most people don't think about that, but it is a humbling: we are just wet machines with only imagined free will. And as I said in two other comments, what most people think of as AI today is just a language model; we wouldn't be very creative or emotive if our brains were reduced to just our language and speech centers. Should humanity go the full route and try to model reasoning and consciousness, then the resulting AI will truly be a humbling, and possibly eventually a replacement, just as we replaced earlier hominids.

Disek Jong

Perhaps the greatest benefit of AI is the simplest and least recognized of all: it induces us to ask what we are. It forces us to take on the subject most of us seem to have become least interested in, even as none could possibly be more important!

Leo Francis

LOL, I wrote something very similar at the end of my (long) post above. Yes, I do think this might ultimately prove to be the most important benefit of AI (and it's one that Mounk fails to adequately appreciate): we may gain a deeper understanding of what it is to be human!

Rob MacLean

I'm a bit of a stochastic parrot myself. I am already surrounded by superior talents in philosophizing, writing, playwriting, acting, and singing, and I have come to terms with it. Acting and playwriting are the only two I've made money at, but I'm not a top-tier talent; I'm more of a permanent journeyman.

I used to play chess avidly. Not expertly, but with absorption and pleasure. If you had asked me thirty years ago if a computer defeating a world champion in chess would kill the game, I would have said, "yes, absolutely." How demoralizing to realize that no matter how hard you train, an app in your pocket owns you at the board. But, no, chess is more popular than ever. Expert players are still idolized and followed avidly despite not being able to defeat even feeble chess programs.

My point is not to suggest that companies will still hire human creatives to write and design if cheaper, high-quality alternatives are to hand. They won't. My point is: the human psyche can adjust to not being the best, all of us have had to face that in our lives at a personal level.

Despite our limitations, the urge to reflect and create is a pressing need for many humans. The real crisis of self-image will come when we prefer to have an a.i. do those things for us rather than make any effort to do it ourselves. I think I just thought of that on my own!...no, wait...I may have read it somewhere...

Sally Arnold

That would take all the fun out of it. Making art is valuable for the doing of it.

Tim Kosub

Just a brief complaint about a wonderful piece: the shot at Freud for having been refuted by contemporary science applies equally to Copernicus (the sun is not the centre of the universe) and to Darwin (his mechanisms of evolution included Lamarckian inheritance as well as natural selection, and his gemmule model of what we now call genetics remains only of historical interest). Leave Siggy be.

Sara Barker

I run a theatre company and expect a resurgence of interest in live, in-person performances due to the points you bring up here. I believe there will be a resurgence both in actors wanting to perform on stage and in audiences wanting to attend.

Trail Alby

I remember having conversations with friends about this exact thing in the 1990s. We were talking more about computer animation, but the general idea was the same.

Jeffrey Quackenbush

I'm skeptical of the survey you've cited as being proof that AI systems can write "good" poetry. A few reasons:

1) Poetry is highly under-theorized as an art form, and aesthetic judgements about poetry are mostly intuitions, rather than critical, informed appraisals, even among people who write poetry or write about poetry professionally.

2) Poetry, as an art form, has mostly been left for dead since the 1950s in popular culture. This has led to a preference for "poetry" that literalizes its "meaning" and runs away from linguistic development; exactly not what good poetry should aim to accomplish. The popularity of writers like Rupi Kaur is a testament to the desolate state of the art and its appreciation.

3) A great poet doesn't sound like other poets. He has a *new* voice, and he aims to surprise at every turn. This is exactly the opposite of what AI systems are meant to do: to conform to expectations and not to surprise. Any poet who cares about the art form should aspire to "make it new."

Juan Jose Gomez

Great post, as usual. For the sake of discussion, let me propose a Gedankenexperiment. Suppose that new regulations freeze the models at their current state of the art. That is, the models are no longer fed with human data harvested from the internet and elsewhere (we could assume that they can, however, exchange synthetic data with each other). Suppose that the freeze lasts, say, 100 (or 1,000) years, and then we examine again the ability of the models to do math, physics, poetry, or music. My guess is that, 1,000 years from now, computers would still beat us at chess, but their poetry would be easily recognised by any human, and found out as "old-fashioned."

So, what is my point? I do not deny that current models are creative, in a way. They can generate software code (I use them all the time for that), and amazing images and poetry... perhaps not so surprising since they have access to virtually all poetry ever written by humanity! So, I would say we have an asymmetric fight here: Shakespeare vs. ChatGPT, which includes Shakespeare, Homer, and everybody else. It is indeed amazing that ChatGPT can synthesise new poetry based on what all those humans have done so far, but, if left alone, will ChatGPT come up with the next revolution in science or poetry?

Yes, it can fake Picasso and cheat most of us into believing that its synthetic work is either Picasso or better than Picasso, but will it create the next wave of creativity in painting? Will it invent something equivalent to the Duino Elegies (not variants of these), Cubism, or the Gödel theorems? Will it discover Grand Unification theories? Perhaps, if we keep feeding the AI with everything we know, they will, since at the end of the day, the amplification effect is enormous. But if the models are frozen, my guess is that 1,000 years from now, they will be "obsolete" compared with human minds.

But they will not be frozen, of course, and that brings us straight into a Brave New World that is hard to predict...

Thanks again!

Alan

As I mentioned in the comment above, right now the AI that most people are familiar with is just a language model. Your question of whether AI can be truly creative would be like isolating the language/speech centers of a human's brain and asking the same question of just that part of the brain. Of course, the answer is no. Will humankind in the future try to mimic consciousness and reasoning using computational neural networks in a computer? If so, and if we model all parts of our brains and connect them together, then the answer to your question above may be yes. After all, why do we have the ability to create and imagine and invent if all our minds are doing is transmitting information between neurons in much the same way? Not to be overly dramatic, but one day a few thousand years hence, AIs may be lounging about wondering what it was like to be made of flesh, the same way we wonder what an Australopithecus was thinking about.

souz

Keeping it simple... humans are part of a planet that is an organic synergy, whose continuity is based on constant renewal. We basically exist to savour, and make the most of, the cycle of *living*. Left to their own devices, humans and the rest of earth's creatures have been going about *surviving* whichever way they best could. Whether one views AI as an extension of human potentiality or as its ultimate replacement, we'd best ask: what does AI get out of all this, really?
