What Exactly Makes Writing Sound Like AI?
My copywriting clients have split into two fairly well-defined camps on AI. Most of them explicitly do not want AI involved at any stage of the process (this is the largest group, partly because it's the kind of work I prefer, so these are the clients I purposefully seek out). A few take the opposite approach, though: either I'm hired to edit AI-generated text and make it sound “human,” or I'm given a topic or prompt and asked to generate copy and refine it until it's publication-ready.
Because of this, I get a lot of first-hand experience with AI writing. I also regularly use AI checkers and have found that they vary dramatically in the accuracy of their results. I would say a well-honed human reader is going to be better at spotting AI text than the checkers are, because AI writing has a distinctive tone unless it's prompted very well. I've also noticed specific phrasings, punctuation, and sentence structures that come up again and again in AI-generated content. All that said, the difficulty AI checkers have separating human from AI text is a sign of how tricky it can be to pin down exactly what gives writing that AI vibe.
Common signs of AI-generated text
One thing I'll say to start: this is definitely a moving target. The companies behind ChatGPT and similar programs are constantly training their models and regularly release new ones that may sound different from their predecessors. This is part of why AI checkers can struggle: by the time they've learned the common patterns of the current crop of models, new ones have likely come along that change things up.
The other difficulty is that none of these AI flags are sure bets. The technology has developed to the point that it has a basic grasp of grammar, voice, and tone, and all of its typical patterns are things you'll find in human writing too, just maybe not as frequently, or not used in quite the same places or ways. Telling the difference comes down to subtle details and nuance, especially if the text was generated by someone who knows how to prompt well.
All that said, there are a few things that AI often does, like:
Em-dash use
The em-dash is one of my personal favorite pieces of punctuation because of its versatility. It's a powerful little tool for building clear, complex sentences and controlling rhythm, and because of that, human writers often overuse it. AI, though, is downright obsessed. It'll throw an em-dash in just about anywhere, and while the result usually makes sense grammatically, it's often not the best punctuation choice for the moment, and the sheer volume of use would get a flag from any copyeditor, regardless of author. It's gotten to the point that, when I write AI prompts, I explicitly tell it not to use em-dashes. It still does most of the time, just far fewer of them than it would otherwise.
Colons in titles and headers
Human writers commonly use colons in titles for academic work, and they're not uncommon in human-written blog posts or article titles in general. Again, it's the volume of use that can point to an AI author. Left to its own devices, I've seen AI produce blog articles with a colon in every single header. Relatedly, those headers are often quite long and have an SEO-ey vibe, as if most of the words are only there to check keyword boxes.
Direct repetition of ideas and phrases
It's not uncommon for writers to restate the same or similar ideas over the course of an article, story, etc. Sometimes this repetition is purposeful, reinforcing key themes or concepts. Sometimes it's just because the writer forgot they already said the thing in an earlier section. Even so, it's fairly rare to see the exact same phrase multiple times in a short text written by a human. You will see it in content written by AI, however. When I'm editing AI-generated text to make it sound more human, a high percentage of my edits are cuts of repetitive phrases, or even entire paragraphs that express the same basic ideas multiple times across the work.
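If you want a quick way to surface exact repeats like this while editing, a few lines of Python will do it. This is just a rough sketch of the idea, not a real detector; the four-word window and the repeat threshold are arbitrary choices on my part:

```python
from collections import Counter
import re

def repeated_phrases(text, n=4, min_count=2):
    """Find exact n-word phrases that appear more than once.

    A crude heuristic: exact repeated 4- or 5-grams are fairly rare
    in short human-written texts but common in raw AI output.
    """
    words = re.findall(r"[a-z']+", text.lower())
    ngrams = (" ".join(words[i:i + n]) for i in range(len(words) - n + 1))
    counts = Counter(ngrams)
    return {phrase: c for phrase, c in counts.items() if c >= min_count}

sample = (
    "By using it daily, you can streamline your workflow. "
    "The dashboard is simple, so you can streamline your workflow "
    "without any extra setup."
)
print(repeated_phrases(sample))
# {'you can streamline your': 2, 'can streamline your workflow': 2}
```

Anything this flags still needs a human judgment call, since some repetition is deliberate, but it's a fast way to spot candidates for cutting.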
“By doing X, result...” sentence construction
AI favors very structured, obvious transitions between paragraphs and ideas. That's a good instinct at its core, and something human writers aim for too, especially in how-tos, sales copy, and similar nonfiction where you want to give the reader a clear sense of progression. But this particular construction (think “By following these steps, you'll have a polished draft in minutes”) is a dead giveaway, and it has stayed consistent across the last few ChatGPT models, to the point that I've become suspicious whenever I see it in blogs or other content I read online.
AI's other favorite words and phrases
Each AI model seems to have its own set of pet words and fallback phrases. Note that just seeing these in a piece of writing doesn't mean AI was involved, but they come up far more often in AI-generated text than in human-written stuff (there's a quick frequency-counting sketch after this list):
- Delve/delve into
- Underscore/underpin
- That being said...
- Takeaway/key takeaway
- Generally speaking/typically
- Transition phrases like on the contrary, conversely, in conclusion, along with, furthermore, moreover, etc.
- Buzzwords like streamline, innovative, game-changing, scalable, seamless, etc.
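For what it's worth, this kind of check is easy to script, too. Here's a minimal sketch that counts hits from the list above and normalizes them per 1,000 words; the word list and the normalization are my own arbitrary choices, not an established metric:

```python
import re

# Pet words and phrases from the list above. Not proof of AI
# authorship on their own; it's the frequency that matters.
AI_PET_PHRASES = [
    "delve", "underscore", "underpin", "that being said",
    "key takeaway", "generally speaking", "on the contrary",
    "conversely", "in conclusion", "furthermore", "moreover",
    "streamline", "innovative", "game-changing", "scalable", "seamless",
]

def pet_phrase_rate(text):
    """Count pet-phrase hits and normalize per 1,000 words."""
    lowered = text.lower()
    word_count = len(re.findall(r"[a-z']+", lowered)) or 1
    hits = {p: len(re.findall(r"\b" + re.escape(p) + r"\b", lowered))
            for p in AI_PET_PHRASES}
    hits = {p: n for p, n in hits.items() if n}
    return hits, 1000 * sum(hits.values()) / word_count

sample = "Furthermore, this seamless, innovative tool will streamline everything."
hits, rate = pet_phrase_rate(sample)
print(hits, f"{rate:.1f} hits per 1,000 words")
```

One or two hits mean nothing; it's a high rate across a whole piece that should make you look closer.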
There's also the fact that AI has a huge vocabulary compared to the average human. Especially if you're using it for more business-like, formal content, it's going to draw on that vocabulary and include words your average person would never reach for, like saying something is “arduous” instead of “difficult,” or that there's a “plethora” or “multitude” of things instead of a lot of them. Granted, many of these are words that writer-types use too, and one or two is probably just a sign the author has a good-sized vocabulary. If there are SAT words in every sentence, though, there's a strong chance AI was involved.
AI checkers side-by-side
Mostly to satisfy my own curiosity, I decided to give some AI checkers a few pieces of text to chew on and see what they made of them. To keep the comparison fair, I ran identical segments of text through four different checkers: Grammarly, ZeroGPT, GPTZero, and Pangram (if you'd rather script a test like this than paste into web forms, there's a sketch after the list). The segments included:
- An excerpt from one of my short stories. I intentionally chose one where I used a good number of em-dashes and had a drier, more minimalist voice
- An excerpt from a book I'm ghostwriting on the topic of entrepreneurship
- A short story I generated using ChatGPT-5, with a prompt to imitate the voice of Edgar Allan Poe
- A blog post I generated using ChatGPT-5, with a prompt that explicitly asked it to use first-person anecdotes and avoid common AI triggers like em-dashes so it sounds as “human” as possible
- A second version of that ChatGPT-generated blog post that I revised for flow and repetition (not necessarily to lower the AI score, though making something smoother to read often does that, too)
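As an aside, several of these checkers offer APIs, which makes batch comparisons like this scriptable. Here's a rough sketch against GPTZero, the one that ended up winning this test for me. The endpoint and field names reflect my reading of GPTZero's public API docs and may have changed, so treat them as assumptions and check the current docs; the file names are just placeholders for your own samples:

```python
import requests

# Assumption: GPTZero's v2 text-prediction endpoint and payload shape,
# per its public API docs at the time of writing. Verify before relying on it.
GPTZERO_URL = "https://api.gptzero.me/v2/predict/text"
API_KEY = "your-api-key-here"  # placeholder; get a real key from gptzero.me

def score_text(text):
    """Send one text sample to GPTZero and return its scoring fields."""
    response = requests.post(
        GPTZERO_URL,
        headers={"x-api-key": API_KEY},
        json={"document": text},
        timeout=30,
    )
    response.raise_for_status()
    doc = response.json()["documents"][0]
    # Field names vary by API version: newer responses include
    # class_probabilities (human/ai/mixed); older ones report
    # completely_generated_prob. Fall back to the raw document.
    return doc.get("class_probabilities", doc)

# Hypothetical file names standing in for the five samples above.
samples = ["short_story.txt", "book_chapter.txt", "poe_story.txt",
           "ai_blog_post.txt", "revised_blog_post.txt"]
for path in samples:
    with open(path, encoding="utf-8") as f:
        print(path, score_text(f.read()))
```

For this post, though, I did it the slow way, pasting each sample into each checker's web interface.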
The results of this experiment were intriguing. All four checkers were pretty good at telling AI from human writing when it came to creative work. The AI-generated story got the highest AI probability scores across the board: 100% from GPTZero, 99.9% from Pangram, 54.89% from ZeroGPT, and 23% from Grammarly. On the other end, Pangram marked my short story excerpt as human-written with strong confidence, Grammarly and ZeroGPT both said about 4% of it contained AI patterns, and GPTZero scored it 94% human and 6% mixed. So overall they got it right, with the possible exception of Grammarly, which missed a huge amount of AI-written text in the fake Poe story.
The professional nonfiction text produced much less consistent results. Grammarly was again the most likely to miss AI-generated content. It actually said the human-written book chapter contained 3% text with AI patterns, but found no AI patterns in the fully AI-generated one (or in the revised version of it).
On the other side, Pangram delivered an outright false positive. It splits analyzed text into sections, and it was 99.5% confident that 2 of the 5 sections in the human-written book chapter contained AI content. On the plus side, it also correctly identified the AI-generated blog post with 99.9% confidence.
GPTZero did the best job of distinguishing AI from human. It concluded the book chapter was human-written (97% human, 2% AI, 1% mixed), pegged the blog post as 100% AI, and gave the massaged blog post a 94% AI and 6% mixed rating, so my little tweaks and edits didn't fool it much.
ZeroGPT effed up the most. It flagged 12.91% of the text in the human-written chapter as AI, but only identified 10.43% of the blog post as AI-generated, and the massaged version scored a smidge lower at 8.4%.
This is just one test with a very small sample size (determined mostly by the number of free credits available daily on Pangram and the character limits on free scans by ZeroGPT), but it mirrors what I've noticed using these tools for work. Grammarly is the easiest checker to trick out of finding AI text, and the least likely to give a false positive on human-written content, though it may incorrectly flag a paragraph or two. Pangram is the best at spotting AI and usually gets it right with human-written stuff, but it can give false positives, so it can't be taken as gospel. The one I trust most at the moment is GPTZero, which performed nearly flawlessly in this test and was the only checker to correctly identify the authorship of all five samples.
As far as broader insights go, I think the lesson is that identifying AI text isn't an exact science, and even the tools built to do it aren't always right. Of course, what matters isn't whether a program thinks you're human. It's how your reader sees it, and the thing that really gives writing humanity is when it comes from a personal place, with authentic emotions and a distinctive voice.