Merriam-Webster crowned "slop" its Word of the Year, defining it as digital content of low quality produced in quantity by generative AI. We all know slop when we see it. It's the movie review that opens with a compelling hook, deploys sophisticated vocabulary across six confident paragraphs, and somehow never says anything coherent about the actual film. It draws you in with the appearance of insight, keeps you reading with polished-looking sentences, and then leaves you realizing it never actually made sense.
But "slop" only describes the output. It doesn't tell us anything about the process that created it, or help us (and students) recognize when we're producing it. For that, we need the adjective: sloppy.
What "Sloppy" Actually Means
This isn't just wordplay. "Sloppy" carries a specific connotation that other words don't. It's not the same as "bad" or "careless" or "low-quality." Sloppy implies an avoidable mess — something made by a person who knew better, or should have, and chose not to bother. A sloppy report isn't one written by someone who lacked the skill. It's one written by someone who skipped the effort.
That distinction is precisely what makes "sloppy" the right word for the worst of the generative AI boom. The problem isn't that AI produces bad output. The problem is that people are using AI to avoid the effort that would make the output good — and then publishing the result as if the effort had been made. Sloppy AI usage is the act of substituting a prompt for the work the prompt was supposed to support.
Where Sloppiness Shows Up
Once you have this lens, you start seeing sloppy AI use everywhere — and you notice that the pattern is always the same. Someone uses AI to skip a step that shouldn't be skipped.
Sloppy sourcing is arguably the most dangerous category. Language models don't verify facts; they predict plausible next words. A 2025 study from Deakin University found that ChatGPT fabricated roughly one in five academic citations[2]. Lawyers have been sanctioned for submitting briefs full of hallucinated case law. The Chicago Sun-Times famously published a summer reading list recommending books that didn't exist. In each case, the sloppiness wasn't that AI hallucinated — hallucination is a known property of the technology. The sloppiness was that nobody checked.
Sloppy engineering follows the same pattern. AI can scaffold code and explain concepts effectively, but AI-generated code is causing problems everywhere from Amazon to the open-source software community. The failure mode isn't that AI wrote the code. It's that someone deployed it without the engineering discipline the code required, treating generation as a substitute for understanding.
Sloppy customer service is what happens when companies replace human support with chatbots to avoid staffing costs, then discover that the bot can't handle nuance, empathy, or edge cases.
Sloppy content is the most visible category and the easiest to spot: it leans on filler phrases, presents shallow balance instead of actual analysis, and contributes nothing that wasn't already said better somewhere else. BuzzFeed's pivot to mass-produced AI content has been accompanied by mounting financial losses and a precipitous decline in market value, with the company now warning of "substantial doubt" about its ability to continue as a going concern. The problem wasn't that AI wrote the articles; it was that nobody ensured the articles were worth reading.
In every case, the underlying mechanism is the same. AI made it possible to skip a step. Someone skipped it. The result was sloppy.
Sloppy Thinking
One category deserves its own treatment, because it's less outwardly visible and more individually consequential than the others.
When we use AI to summarize every article, draft every email, and resolve every question, we begin to outsource the cognitive work that makes us capable of doing those things well in the first place. Researchers have described this as "cognitive atrophy": the gradual weakening of skills that aren't exercised. Ethan Mollick frames the paradox directly, stating that AI "works best for tasks we could do ourselves but shouldn't waste time on, yet can actively harm our learning when we use it to skip necessary struggles."
Sloppy thinking is the assumption that AI can do the hard work of understanding for you. It can't. It can produce text that resembles understanding, which is worse than producing nothing, since it lets you believe you've done the work when you haven't. This is the trap that makes all the other traps possible. Sloppy sourcing happens because someone didn't think critically about whether the citations were real. Sloppy engineering happens because someone didn't think carefully about whether the code was sound. The root of every sloppy AI failure is a moment where a human stopped thinking.
AI Is Not the Problem
Consider the automatic camera. Before it existed, producing a beautiful photograph required mastering the technical relationships between aperture, shutter speed, and film sensitivity, knowledge that excluded most people from the craft. The automatic camera removed that barrier. It expanded the number of people capable of capturing a striking image by orders of magnitude. But it didn't eliminate the need for the photographer. Someone still has to choose what to point the camera at, decide when to press the shutter, and recognize whether the result is worth sharing. The camera handles the exposure. The human handles the choices that give the image value (or don't!).
AI is the most powerful “automatic camera” ever built — for writing, for code, for analysis, for nearly every form of intellectual work. It can dramatically expand who is able to produce valuable output. But the value still depends on the choices a human makes before and after the tool does its part.
The Draft and the Deliverable
We wouldn't ban automatic cameras (now digital and built into every smartphone) because of the tsunami of low-effort photographs posted everywhere; that's a selection problem, not a tool problem. Nor is the antidote to sloppiness banning AI. It's recognizing the difference between a draft and a deliverable. AI is genuinely powerful as a draft space: a place to explore ideas, go wide, generate options, and think out loud. The problems begin at the handoff, the moment something moves from private exploration to public use. A draft can be sloppy. A deliverable cannot. And right now, the most common form of sloppy AI usage is treating the draft as the deliverable. Publishing the first output, shipping the generated code, and sending the unedited email are all sloppy AI use, because the output looked good enough to skip the step where a human makes it actually good.
The question isn't whether to use AI. It's whether, at the moment of handoff, a human applied the judgment, verification, and care that the task required. If the answer is no, the result is sloppy. Skip that work, and you get slop.