I think there's something in there with the character hierarchy of screenwriter vs novelist vs poet; it seems like the screenwriter in the story writes to make a living, the novelist does it for prestige, and the poet does it largely for the love of the game. The screenwriter is on board with AI until he realizes it'll hurt him more than it'll help him--ironic since he had been excited about being able to use different actors' likenesses!--and the whole time he's looking down at the poet like "Oh, god, if all this takes off I'm going to be as poor and pathetic as that guy." (Which raises interesting questions about the poet's stake in all of this: he doesn't actually have much to lose here, considering how little money or recognition he gets in the first place, but he's helping the other two guys anyway.) The novelist is railing against the AI, but he's also initially disappointed to find out that his work wasn't important enough to use in its training data... and then later gets a kind of twisted thrill when it does actually quote his own work back at him. I dunno. I think it's a messy story in the same way that the conversation about AI and the arts is itself messy, which I like. And I always appreciate a story that leaves me with questions to mull over instead of trying to dump a bunch of platitudes in my lap :P
For example, if it had ended a few sentences earlier, on that potential bit of metafiction, it would be suggesting that the story we just read was, or at least could be, the story the AI wrote for the novelist, and that the AI does understand their frustration but represented itself as not understanding it. That gives us a great deal to think about and builds a second perspective into the entire piece: the perspective of the AI. But as written, that reading only works with the conversation part of the story, and those last few lines make it really not work at all.
Question: why do people write and buy new books? There are millennia of amazing works, but people prefer to read new things. Some gems persist, but even my kids read entirely different YA novels than I did as a kid.
Art is communication. A big part of why we like new art is that it reflects the culture and mores of the now. LLMs can only give you the most likely next-token prediction, which is antithetical to what we want when we interact with art.
Not to say it can't spit out reasonable plots for shows or be a valuable aid to writers, but I think it will continue to serve best as an aid to humans, even more so than in coding. Maybe artists will turn into LLM wranglers like software engineers are starting to, but a human-in-the-loop is going to remain valuable.
> “Writing a book is supposed to be hard,” he said.
> “Is it, though?” said the AI. The novelist wasn’t sure, but he thought he detected a touch of exasperation in the machine’s voice.
> “Perseverance is half the art,” he said. He hadn’t had much natural talent and had always known it, but he had staying power.
It's this right here. I don't think any LLM-based AI is going to be able to replace raw human creativity any time soon, but I do think it can dramatically reduce the effort it takes to express your creativity. And in that exchange, people whose success in life has been built on top of work ethic and perseverance rather than unique insight or intelligence are going to get left behind. If you accept that, you must also accept the flip side: people who have been left behind despite unique insights and intelligence, because of a lack of work ethic, will be propelled forward.
I think a lot of the Luddite-esque response to AI is actually a response to this realization happening at a subconscious level. From the gifted classes in middle school until I was done with schooling, I remember two types of students: those who didn't work very hard but succeeded on their talents, and those who were otherwise unexceptional beyond their organizational skills and work ethic. Both groups thought they were superior to the other, of course, and the latter group has gone on to have more external success in their lives (at least among the student peers I maintain contact with decades later). To wit: the smart lazy people are high-ranking individual contributors, but the milquetoast hard workers are all in management, and the smart lazy people who report to them bitch about them. The inversion of that power dynamic in creative and STEM professions... it's not even worth describing the implications, they're so obvious.
Let's say, just for the sake of argument, that AI can eventually serve to level the playing field for everything. It outputs novels, paintings, screenplays - whatever you ask it for - of such high quality that they can't be discerned from the best human-created works. In this world, the only way an individual human matters in the equation is if they can encode some unique insight or perspective into how they orchestrate their AI; how does my prompt for an epic space opera vary meaningfully from yours? In other words, everything is reduced to an individual's unique perspective of things (and how they encode it into their communication to the AI) because the AI has normalized everything else away (access to vocabulary, access to media, time to create, everything). In that world, the only people who can hope to distinguish themselves are those with the type of specific intelligence and insight that is rarely seen; if you ask a teacher, they will recount the handful of students over their career that clear that bar. Most of us aren't across that bar, less than 1% of people can be by definition, so of course everyone emotionally rejects that reality. No one wants their significance erased.
We can hand-wring about whether that reality can ever exist, whether it exists now, whatever, but the truth is that's how AI is being sold, and I think that's the reality people are reacting to.
This requires the machine to understand a whole bunch of things. You're talking about AGI; at that point there will be blood in the streets, and screenplays will be the least of our problems.
I think there's still a very high chance that someone willing to refine their AI-co-generated output 8-10+ hours a day, for days on end, will have much more success than someone who puts in 1 or 2 hours a day on it and largely takes one of the first things from one of the first prompt attempts.
The most successful people I know are in a category you leave out: the people who will put in long hours out of being super-intrinsically-motivated but are ALSO naturally gifted creatively/intelligently in some domain.
That's the truth right now, but that's merely a limitation of the technology. Particularly if you imagine arbitrarily wide context windows such that the LLM can usefully begin to infer your specific preferences and implications over time.
> The most successful people I know are in a category you leave out: the people who will put in long hours out of being super-intrinsically-motivated but are ALSO naturally gifted creatively/intelligently in some domain.
Those are the people I mention at the end, those that clear the bar into being uniquely special. From what I hear from my friends that have been teaching for about twenty years now, you're lucky if you get more than one or two of those every ten years.
It's like conventions in art: you could make Casablanca much more easily today than in 1942. But if you made it today it would be seen as lazy and cliche and simplistic, because it's already been copied by so many other people. If you make something today, it needs to take into account that everyone has already seen Casablanca + nearly 85 additional years of movies and build on top of that to do something interesting that will surprise the viewer (or at least meet their modern expectations). "The best created human works" changes over time; in your proposed world, it will change even faster, and so you'll have to pay even more attention to keep up.
So if you're content to let your AI buddy cruise along making shit for you while you just put in 1 hour a day of direction, and someone else with about equal natural spark is hacking on it for 10 hours a day—watching what everyone else is making, paying much more active attention to trends, digging in and researching obscure emerging stuff—then that second person is going to leave you in the dust.
> Those are the people I mention at the end, those that clear the bar into being uniquely special. From what I hear from my friends that have been teaching for about twenty years now, you're lucky if you get more than one or two of those every ten years.
Again, it's a false dichotomy. What you described was just "super super smart", not what I suggested as "smart + hard worker": "In that world, the only people who can hope to distinguish themselves are those with the type of specific intelligence and insight that is rarely seen; if you ask a teacher, they will recount the handful of students over their career that clear that bar. Most of us aren't across that bar, less than 1% of people can be by definition, so of course everyone emotionally rejects that reality. No one wants their significance erased." That's not hard work + smart, that's "generationally smart genius." And that set is much smaller than the set I'm talking about. It's very easy to coast on "gifted but lazy" to perpetually be a big fish in a small pond school-wise. But there are ponds out there full of people who do both. Twenty or thirty years ago, this was the difference between a 1540 SAT score, A's/B's in high school, and going to a very good school, versus a 1540 SAT score, A's in high school with a shitload of AP courses, significant positions in extracurricular activities, and going to MIT. I don't know what it looks like for kids today (parents have cargo-culted all the extracurriculars so that it now reflects their drive more than the kids') but those kids who left the pack behind to go to the elite institutions were grinders AND gifted.
Anyone you'd interact with in a job in a HN-adjacent field has already cleared several bars of "not actually that lazy in the big picture" to avoid flunking out of high school, college, or quitting their office job to bum around... and so at that point there's not that same black-and-white "it'll help you but hurt you" shortcut classification.
EDIT: here's a scenario where it'll already be harder to be lazy as a software engineer, not even in the "super AI" future: in the recent past, if you were quicker than your coworkers and lazy, you could fuck around for 3 hours, then knock something out in 1 hour, and look just as productive as, or more productive than, many of your coworkers. If everyone knows - even your boss - that it should actually take only 45 minutes of prompting and then reviewing code from the model, and they can trivially check that in the background themselves if they get suspicious, then you might be in trouble.
It's an insightful point, but I think there's more going on. It seems that quite a lot of the people consuming media and art do actually care whether it's the product of a human mind or generated by a machine. They want a connection with the artist. Maybe it's a bit like organic produce. If you give me a juicy white peach, I probably can't tell whether it's an organic one, lovingly raised and harvested by a farmer with a generations-in-the-family orchard, or one that's been fertilized, pesticide-sprayed, and genetically engineered by a billion-dollar corporation. But there's a very good chance I care about the difference. I'm increasingly getting the impression that a big swathe of consumers prefer human-made art, probably a bigger swathe than the percentage that insist on organic produce. There will be a market for human-created works because that's something consumers want. Yes, some authors will cheat. Some will get away with it. It'll start to look a lot like how we think of plagiarism.
Maybe the strength of that preference varies in different parts of the industry. Maybe consumers of porn or erotica or formulaic romance or guilty pleasure pop songs don't care as much about it being human-produced. Probably no one cares about the human authenticity of the author of a technical manual. But I suspect the voters at the Oscars and Grammys and Pulitzers will always care. The closer we are to calling something "art", the more it seems we care about the authenticity and intention of the person behind it.
The other thing I think is missing from the debate is the shift from mass-market works to personalized ones. Why would I buy someone else's ChatGPT-generated novel for twenty bucks when I could spend a few cents to have it generate one to my exact preferences? I'd point to the market for romance novels as one where you can already see the seeds of this. It's already common for them to be tagged by trope: "why choose", "enemies to lovers", "forced proximity", etc. Readers use those tags to find books that scratch their very specific itch. It's not a big jump from there to telling the AI to write you a book that even more closely matches your preferences. It might look even less like a traditional "book" and more like a companion or roleplay world that's created by the AI as you interact with it. You can see seeds of that out there too, in things like SillyTavern and AI companion apps.
This was a really entertaining read, do any of you have similar contemporary stories to share?