Did I Pick the Worst Moment in History to Write a Novel?
AI might write an entire novel in a few seconds, but you could fill a book with what it doesn’t understand about its own characters.
Have you ever experienced a what-the-hell-am-I-doing moment? Maybe you know the kind. You wake up at three AM, wondering if your past decisions make you a buffoon. I’ve had dozens of those moments.
The first big one hit in my early twenties, after an acquaintance from high school invited me to do a little felony drug running. I declined, but the offer made me wonder whether I'd received that prestigious opportunity because I was making all the right choices and had the world by the tail.
(Answer: I did not.)
The most recent what-the-hell moment struck when it suddenly seemed ridiculous to have spent the last two years writing a novel. (Shanghai Lottery. 2026. Stay tuned.)
Any respectable novel goes through multiple drafts, and I finished my first draft roughly thirty-seven seconds before a person could produce a book simply by typing a prompt into a large language model. Instead of pouring two years into a passion project, some “writers” are investing all of two minutes. I may have picked the worst moment in history to become a novelist because the market is now flooded with AI-generated fiction.
Is it any good? Let’s look at a case study.
Last year, author and real-life person Curtis Sittenfeld competed against ChatGPT to produce a 1,000-word “beach read.” We, the readers, were invited to compare the two pieces and try to guess who wrote what.
Beach reads are not my bag, but Sittenfeld’s story held my interest long enough to finish because there was a human quality to the characters. I wondered what they would do next.
The AI story? I spotted the tells within the first paragraph. By the fourth paragraph, my mind was wandering. (Note to self: pick up a new miter saw from Home Depot.) There was nothing in ChatGPT’s story to hold my attention. My wife described it as “a sequence of events.” That’s a problem if you’re trying to produce compelling fiction. No one asks, “What’s the last great sequence of events you read?”
The fundamental problem with ChatGPT’s story was its AI-generated characters. They were hollow and unsettling. But why? What gave them those qualities?
For starters, the characters were hampered by descriptions like this:
“He was tall, with an easy smile and eyes that seemed to dance with mischief.”
I guess his mouth was Matthew McConaughey and his eyes were Fred Astaire hopped up on Mountain Dew. Setting aside the trite phrasing, that’s a disquieting combination of facial features. If you stop to picture an easy smile beneath a pair of wild eyes, Hannibal Lecter might come to mind. That would be fine, except ChatGPT was trying to create an alluring love interest, not a psychopathic killer. This description puts the character smack-dab in the middle of the uncanny valley.
Then there are phrases like this. It’s a description of the first conversation between two characters who were (allegedly) flirting with each other:
“They spoke about their lives, their interests and the little quirks that made them unique.”
Oh, did they? That’s nice. I guess ChatGPT followed Hemingway’s advice to writers: “Never burden the reader with intriguing details about a character, especially interests and/or quirks.”
I could carry on with examples of ChatGPT’s inept attempts to illustrate humanity, but there’s one line in particular that showed me the robot simply doesn’t understand us humans. In this passage, the main character felt ambivalent about approaching a man she found attractive:
“Lydia hesitated, her mind a whirlwind of conflicting emotions. On the one hand, she felt the tug of familiarity and routine. On the other, she felt the irresistible pull of something unknown, something that reminded her of her own lost desires.”
This appears to be an attempt to show internal conflict, but we never learn why she’s pulled toward the familiar, or what draws her to the unknown. We never see the nature of her “lost desires,” whatever that means. We know nothing of her history, her wishes, or her fears, so we have no basis for comprehending her struggle. There is nothing in this character for a reader to care about.
But enough about Hannibal McConaughey and poor soulless Lydia. Let’s talk about me. Shanghai Lottery is my first attempt at a novel, so I gave myself guardrails from the outset.
For example, I made myself stick to the hero’s journey. Rocky. Skywalker. Nothing fancy. I can play with structure in future projects, but for now, I need to keep it simple.
For the same reason, I wrote a first-person narrative. It’s harder for me to screw up first-person. Not impossible, but harder.
Here’s the guardrail pertinent to this discussion: I gave my protagonist a personality style I understand. I know what makes the character tick, so it’s hard for me to make a mistake with him. I understand how he sees others, how he believes others see him, and why he fears the thing he wants most out of life.
The conflict in Shanghai Lottery doesn’t flow from the plot… though it’s an entertaining plot. It doesn’t even flow from the antagonists… though early readers love my bad guy.
The conflict flows from my protagonist’s defenses against the world. He is the source of his own problems. But it’s not my job to explain that in the book, or even to mention it. My job is to toss my protagonist into the events of the story and give him just enough to survive. Or as a writing mentor put it, my job is to beat the ever-living fuck out of my main character.
You get the fun of interpreting his motives and choices. Subtext is captivating. (Check out “Hills Like White Elephants.” It’s short, it’s pure subtext, and it’s beautiful.)
The most intriguing aspects of people are the unseen and the unspoken: the invisible hand within each of us. My main character doesn’t talk about his internal conflict because he can’t see it. He doesn’t know himself.
But YOU can see it. I can see it. And AI cannot. I think that’s why AI creates colorless, hollow characters that are difficult to care about. AI can only mimic what it can see, and the best parts of us are tucked away, out of sight.
Maybe I’m shining myself on. That would be a human thing to do. But I don’t think I’m wasting my time on this book because I don’t believe I’m competing with AI. If you’re writing about human beings, neither are you.
—
Side note: Mike Todasco gave ChatGPT a chance to clean up its story. To me, the revised characters seem even less human. On the plus side, the chatbot developed a weird foot fetish. I think it was a failed metaphor. Whatever the case, the oddness of it was enough to push me through the story. Technically, I guess that’s a win for ChatGPT.