AI Only Knows What We’ve Told It: The Smartest Parrot in the Room

Imagine a parrot. A really, really smart parrot. One that’s read all of Wikipedia, all of Reddit (yes, even the cursed parts), and every recipe for banana bread ever written. That’s AI. It’s the know-it-all friend at trivia night who’s never actually lived a day outside the basement — but will still argue with you about how to parallel park.

The key thing to remember is this: AI only knows what we told it. It’s not magic. It’s not psychic. It’s not plotting to take over the world (unless we train it on too much dystopian sci-fi). Let’s break it down, with a few laughs and a few experts along the way.

AI: Trained on Everything We’ve Ever Said (Even the Dumb Stuff)

Artificial Intelligence — specifically large language models like ChatGPT — is trained on vast amounts of human-created text. Books, articles, forums, Wikipedia pages, and yes, probably that one time you rage-posted a Yelp review at 2am.

Dr. Emily Bender, a computational linguist at the University of Washington, famously compared large language models to "stochastic parrots" — statistical machines that mimic language without understanding it. They don’t think; they predict what word probably comes next based on patterns in the data.

“These models are very good at giving you what looks like coherent text, but they don’t have any grounding in the world.” — Dr. Emily Bender

Translation: your AI assistant can sound like it understands quantum physics, but it doesn’t know a quark from a corn chip.
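That "predict the next word from patterns" idea sounds abstract, so here's a deliberately tiny sketch. Real language models use neural networks over billions of subword tokens; this toy bigram counter (every name here is invented for illustration) just tallies which word followed which in its training text and parrots back the most common follower:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count which word follows which in the training text."""
    words = text.lower().split()
    table = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        table[current][nxt] += 1
    return table

def predict_next(table, word):
    """Return the most frequent follower seen in training, or None if unseen."""
    followers = table.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

corpus = "the parrot repeats the words the parrot has heard"
model = train_bigrams(corpus)
print(predict_next(model, "the"))    # "parrot" — it followed "the" most often
print(predict_next(model, "quark"))  # None — never appeared in training data
```

Notice what it can't do: ask it about any word outside its training text and it has nothing. A real model is vastly more sophisticated, but the dependence on training data is exactly the same.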

Garbage In, Garbage Out (Or: Why AI Can Be Confidently Wrong)

Because AI only knows what it's been trained on, it's also limited by that data. If we feed it biased, outdated, or just plain dumb information, the AI will confidently regurgitate those same things.

This is why AI might:

  • Give you a historically inaccurate summary of World War II,

  • Recommend putting glue in your pizza crust (please don’t),

  • Or invent a completely fake person named "Harold Butterflap" as the 8th President of the United States.

And yet it will say it all with the confidence of a college freshman who just read half of one book.

As AI ethicist Timnit Gebru warns,

“These systems don’t understand what they’re saying, and that makes them dangerous in high-stakes applications.” — Dr. Timnit Gebru

Like that time an AI told someone their symptoms were "100% consistent with lycanthropy." Again, it only knows what we taught it — and we taught it everything, including werewolf forums.

AI Has No Original Thoughts — And That’s the Point

AI isn’t supposed to invent knowledge. It’s not supposed to "know" in the human sense. It doesn’t wake up in the morning and decide to write a novel (yet). It doesn't have desires, beliefs, or even taste in music (though it will confidently recommend a Nickelback playlist if the training data suggests it).

AI is a remix artist, not a poet. It’s the Weird Al of information — clever, fast, and surprisingly helpful, but entirely derivative.

So… What’s the Point Then?

AI is brilliant at making connections between things we already know. It’s like having an assistant with photographic memory and no ego. It can summarize 20 papers in 10 seconds, write a blog post in the voice of a pirate, or generate 300 cat names based on Norse mythology. (You're welcome, Loki Whiskers.)

But if we ask AI a question that has no good data, it won’t meditate under a tree and achieve enlightenment. It’ll just make something up that sounds right.

And the only reason it can do any of this? Because people — smart, weird, brilliant, opinionated people — put all this knowledge out there in the first place.

Final Thought: We’re Teaching AI to Be a Very Fancy Mirror

So next time your AI assistant says something absurd, remember: it's just reflecting us back at ourselves — every Reddit thread, every tweet, every 1998 blog post about alien encounters and Y2K.

AI is humanity's greatest echo chamber. It’s not a god. It’s not a demon. It’s a glorified autocorrect with a degree in mimicry.

Let’s just make sure we keep feeding it good stuff. Otherwise, don’t be surprised when it tells you to season your risotto with glitter glue and vibes.

TL;DR

AI only knows what we’ve told it. It’s brilliant at remixing, terrible at understanding, and completely dependent on the quality of human input. Basically, it's like a really enthusiastic intern with no life experience and access to the internet.

So let’s train it well — or prepare to meet Harold Butterflap, werewolf president.
