The 5 Biggest ‘Tells’ That Something Was Written By AI
In the digital age, artificial intelligence (AI) has become a pervasive force in content creation, from generating articles to composing tweets. Despite AI’s increasing fluency, there are still discernible differences between human and AI-generated content. For those curious or cautious about distinguishing human from non-human work, here are the five biggest ‘tells’ that a piece of writing might have been crafted by an AI.
1. Repetitive Phrasing and Vocabulary
AI-generated text often has a penchant for repetitive vocabulary and phrases. This is because AI models, including the powerful GPT (Generative Pre-trained Transformer), rely on statistical likelihoods and may repeatedly use words or phrases that are considered ‘safe’ choices. Humans, in contrast, are more likely to introduce a broader variety of vocabulary and synonyms, reflecting the natural creativity and nuance of human language.
2. Awkward Syntax and Grammar
AI systems sometimes generate sentences that either lack nuance or feel awkwardly constructed. They may adhere strictly to grammatical rules to the detriment of readability and natural flow. Human writers, meanwhile, might play with sentence structure for effect or ease of reading, something AI struggles to replicate effectively.
3. Shallow Context and Overreliance on Generalizations
AI-generated content tends to struggle with deep contextual understanding and may produce generalized statements that don’t add much value or insight. This occurs because, although AI can simulate understanding by correlating data points from its training, it doesn’t ‘understand’ content in the way humans do. For example, an AI might write about a historical event accurately describing dates and figures but fail to capture the underlying human conflicts or the emotional weight of the event.
4. Overuse of Commonplace Ideas
An AI’s training involves vast swaths of existing text, leading it to often produce content that leans towards the generic. For instance, in writing about innovation, an AI might repeatedly stress clichés such as “thinking outside the box” without providing concrete examples or original insights. In contrast, human writers are more likely to include unique perspectives or challenging arguments based on personal knowledge or new research.
5. Factual Errors and Logical Inconsistencies
Finally, because AI constructs its understanding based on patterns in data, it can easily make mistakes in facts or display logical inconsistencies, especially when writing about niche or less well-represented topics. Humans, especially experts in a particular field, can pick up on nuances and errors that might not be evident in the data alone.
Ensuring Authenticity in the Age of AI
With AI becoming an ever more capable companion in the realm of writing, its handiwork is becoming increasingly subtle to detect. For industries ranging from journalism to academia, the ability to discern the origin of text—AI-generated or human-written—is crucial, impacting everything from credibility to ethical considerations.
Engaging with these ‘tells’ aids not just in authentication but also in guiding how we interact with and regulate digital content. As AI continues to evolve, so too will the methods for distinguishing it from human creation, promising a fascinating ongoing dialogue between human ingenuity and artificial intelligence.