Why is AI slop so easy to spot but hard to detect?

Recently, my employer posted a story to LinkedIn about our Copilot product. It was a typical release announcement: a rundown of new features and a link to the company blog. Shortly after posting, I received a Slack message from our Marketing team:

"Hey Matt, can you please take a look at this comment on LinkedIn? Seems like something you should be able to answer."

I took a peek, and the commenter had asked a vaguely related AI question about the post. Something about it didn't pass the sniff test, so I opened the user's profile. I saw comment after comment after comment, mere minutes apart, on every public AI-related post on LinkedIn. It didn't take long to conclude the account was just peddling AI slop.

It's funny: AI detectors don't work (yet), but I know it when I see it. I don't know how long it took this person to write that simple script, but it ended up wasting the time and effort of several people, only for us to ultimately remove the comment. I felt bad seeing other people engage seriously with the account on other posts.
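For what it's worth, the tell here wasn't the text of any single comment but the posting cadence. Here's a minimal sketch of that kind of behavioral check, assuming you already have a list of comment timestamps for an account; the function name and thresholds are hypothetical and untuned, just an illustration of the heuristic:

```python
from datetime import datetime, timedelta

def looks_like_slop_account(comment_times: list[datetime],
                            min_comments: int = 10,
                            max_gap: timedelta = timedelta(minutes=5)) -> bool:
    """Hypothetical heuristic: flag accounts that post many comments
    only minutes apart, like the pattern described above."""
    if len(comment_times) < min_comments:
        return False
    times = sorted(comment_times)
    gaps = [later - earlier for earlier, later in zip(times, times[1:])]
    # A human rarely posts a substantive comment every few minutes for
    # an extended stretch; a script does exactly that.
    rapid = sum(1 for gap in gaps if gap <= max_gap)
    return rapid / len(gaps) > 0.8
```

Content-based detectors keep losing the arms race, but a behavioral signal like this is hard to fake without slowing the script down to human speed, which is probably why the pattern jumped out at a glance.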

But what about the AI-generated content we don't recognize? With the new o1 or DeepSeek R1 models out there, are we just a couple of prompt tweaks away from not noticing AI slop anymore?

Anyway, congrats to this guy for recognizing the slop for what it is!
