How to Spot AI-Generated Nonsense (Before It Ruins Your Reputation)
Your LinkedIn feed is flooded with perfect-looking infographics. Your email inbox contains compelling case studies with suspiciously polished data.
That industry report everyone’s sharing has all the right buzzwords but something feels… off.
I’ll be honest: I don’t have all the answers here. This whole landscape is shifting faster than any of us can keep up with, and I’m figuring it out alongside everyone else. But I’ve started noticing patterns, and I’ve definitely made some mistakes that taught me what to watch for.
The thing is, most people think they need to become digital forensics experts to avoid getting fooled. That’s not realistic for busy professionals who have actual work to do. What works better, I’ve found, is developing sharper information instincts and being okay with saying “I’m not sure about this source.”
The stakes feel higher now than they used to be. When everyone’s sharing the same compelling statistic or study, there’s pressure to jump on board quickly. But I’ve seen people reference data that turned out to be questionable, and the awkward follow-up conversations aren’t fun for anyone.
What I’ve Learned to Watch For
Research That’s Almost Too Good
Sometimes I’ll come across a study that perfectly supports whatever point I want to make. The numbers are clean, the conclusions are clear, and everything aligns beautifully with my existing beliefs.
That’s exactly when I’ve learned to pause.
Real research tends to be messier. It has limitations sections that actually list limitations. The sample sizes aren’t perfectly round numbers. There are usually caveats or areas where the researchers acknowledge uncertainty.
I’m not saying perfect-looking research is automatically fake, though. Sometimes legitimate studies do produce clean results. But it’s worth taking an extra minute to check whether other credible sources are citing the same work.
Content That Feels Like It Was Written by Committee
This one’s trickier because AI writing has gotten surprisingly good. But there’s often something that feels… generic about AI-generated content. It hits all the right points but lacks the weird little details that make human writing interesting.
When I read something that feels comprehensive but somehow bland, I find myself wondering if the author actually has experience with what they’re describing. Do they mention unexpected challenges? Do they admit when something didn’t work as expected? Or does everything sound like it went exactly according to plan?
I’m still learning to trust this instinct, and I’m probably wrong sometimes. But it’s become one of my informal filters.
Visual Content That’s Suspiciously Perfect
I’ll admit I’m not great at spotting sophisticated deepfakes or AI-generated images. The technology has gotten too good for me to rely on visual analysis alone.
What I do check is context. If someone is sharing multiple high-quality images from different events or locations, especially if they’re the only source for these images, that raises questions for me. Legitimate news photos usually appear across multiple outlets, often with photographer credits.
My Imperfect Verification Process
I’ve developed what I call a “good enough” verification habit. It’s not foolproof, but it catches most of the obviously questionable stuff without taking forever.
Before sharing anything significant, I try to answer: Can I find this information from at least one other source I recognize and trust?
Sometimes the answer is no, and that’s okay. I’ll either skip sharing it or add a qualifier like “I’m not certain about this source, but the point is interesting.”
I keep a mental list of publications and authors I generally trust in my field. When something big is happening, I check how these sources are covering it. If they’re not covering it at all, or if their take is significantly different, that tells me something.
Ultimately, it’s all about having reliable starting points when I need to verify information quickly.
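If you’re comfortable with a little Python, you can even script the “are my trusted sources covering this?” check. Here’s a rough sketch, assuming the third-party feedparser library and RSS feeds you’ve picked yourself; the feed URLs below are placeholders, not recommendations:

```python
# Rough sketch: scan a hand-picked set of trusted RSS feeds for a keyword.
# Requires the third-party feedparser library (pip install feedparser).
import feedparser

# Placeholder URLs: swap in feeds from publications you actually trust.
TRUSTED_FEEDS = [
    "https://example.com/industry-news.rss",
    "https://example.org/research-blog.xml",
]

def who_is_covering(keyword: str) -> None:
    for url in TRUSTED_FEEDS:
        feed = feedparser.parse(url)
        hits = [entry.get("title", "") for entry in feed.entries
                if keyword.lower() in entry.get("title", "").lower()]
        if hits:
            print(f"{url} is covering it: {hits}")
        else:
            # Silence from a trusted source is itself a signal.
            print(f"{url}: no mention of {keyword!r}")

who_is_covering("that viral statistic")
```

Silence from every trusted feed doesn’t prove something is fake, but it’s exactly the “that tells me something” signal I mean above.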
The Tools That Actually Help
I’ve tried various fact-checking websites, but honestly, I don’t always remember to use them. What works better for me is bookmarking a few go-to sources and checking those when something seems important.
For academic research, I’ve learned to look for DOI numbers and try to find the original paper rather than relying on summaries. This doesn’t always work because sometimes papers are behind paywalls or written in technical language I don’t fully understand. But at least I can see if the research actually exists.
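That “does the research actually exist” step is also easy to automate. Here’s a minimal sketch that asks the public Crossref REST API whether a DOI is registered. A hit only proves the DOI resolves to a real record, not that the paper is any good, and Crossref doesn’t cover every DOI out there:

```python
# Minimal sketch: check whether a DOI is registered with Crossref.
# Requires the third-party requests library (pip install requests).
import requests

def doi_exists(doi: str) -> bool:
    """Return True if Crossref has a record for this DOI, printing its title."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    if resp.status_code != 200:
        return False
    work = resp.json()["message"]
    # Crossref returns the title as a list of strings.
    print("Found:", work.get("title", ["(no title)"])[0])
    return True

# A real, well-known DOI used purely as a smoke test.
print(doi_exists("10.1038/nature14539"))  # the 2015 "Deep learning" Nature paper
```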
And then there are the Deep Research features that many AI chatbots offer these days. You can ask one to research a topic, and it will not only produce a comprehensive report but, more importantly, list the websites it consulted along the way, so you can double-check the sources yourself.
I’ve also heard that Google Scholar is more useful than regular Google for checking whether a study actually exists and is being cited by other researchers. I haven’t tried it myself yet, but it seems worth passing along.
What I’m Still Figuring Out
Social media verification is where I struggle most. It’s increasingly difficult to tell which accounts are real people versus sophisticated AI-powered accounts. I’ve started paying more attention to posting patterns and whether the content feels like it comes from someone managing other life responsibilities, but this feels like guesswork.
I’m also not sure how to handle situations where the information might be accurate but the source is questionable. Sometimes valuable insights come from unexpected places, and I don’t want to dismiss everything that doesn’t come from traditional authorities.
The balance between healthy skepticism and outright paranoia is something I’m still working on. There’s a point where questioning everything becomes paralyzing, and I haven’t figured out exactly where that line is.
A Simple Starting Point
If you’re feeling overwhelmed by all this, here’s what I’d suggest starting with: Next time you’re about to share something that could impact how people see your professional judgment, take thirty seconds to ask yourself if you can verify the source.
If you can’t, that doesn’t necessarily mean don’t share it. But maybe add a qualifier acknowledging that you haven’t verified it independently. Something like “I can’t confirm this source, but the perspective is worth considering.”
Most people will respect the honesty, and you’ll protect yourself from the embarrassment of sharing something that turns out to be fabricated.
The Ongoing Challenge
This whole situation feels like trying to hit a moving target. The technology keeps improving, the bad actors keep getting more sophisticated, and the volume of information keeps increasing.
I don’t think any of us will become perfect at spotting fake content. But I do think we can get better at acknowledging uncertainty and being more thoughtful about what we amplify.