The Brain Is the Original AI: What Neural Networks Teach Us About Empathy, Bias & Connection
- Dre Feeney

- Oct 30, 2025
- 4 min read
Updated: Nov 13, 2025
TL;DR
Artificial intelligence runs on data, prediction, and pattern recognition, and so do we.
Our brains are the first neural networks, constantly predicting reality through the lens of our past experiences, sensory inputs, emotions, and environments.
Understanding how both AI algorithms and human minds “learn” and produce outputs can help us retrain bias, deepen empathy, and reconnect with fellow humans in a modern world optimized for speed, not understanding.
When the Machines Started to Sound Like Us
We’ve all heard of AI.
It’s in our pockets, our kitchens, our offices.
We ask it to summarize our emails, fix our grammar, and build our resumes.
A million quiet calculations, learning from us, about us, all at once.
Our devices don’t just respond; they anticipate and predict.
But long before the machines learned to mimic our language, we were already doing this.
When you walk into a room, your mind is less like a camera capturing a scene, and more like a sophisticated AI constantly running simulations.
It studies faces and micro-expressions, measures tone and distance, reads posture, light, and silence, all in a fraction of a second.
It’s running on a network of neurons so vast that even the most advanced systems can only approximate its speed and subtlety.
Understanding how our brains learn isn’t just a lesson in neuroscience, it can be a roadmap for deeper connection.
What Is AI, Really?
Artificial intelligence isn’t magic; it’s mathematics trained to imitate intuition.
At its simplest, AI is pattern recognition at scale: systems that learn from enormous amounts of data to make predictions, recommendations, or decisions.
A neural network — the architecture behind most modern AI — is built from layers of interconnected “neurons.” Each connection strengthens or weakens based on experience, much like synapses in our own brains.
Feed it enough examples, let it make mistakes, learn from the mistakes (hopefully), and over time, it starts to predict what comes next.
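That training loop can be sketched in a few lines of Python: a single hypothetical "neuron" whose connection weights get nudged after every mistake. A real network stacks many layers of these, but the learning rule below (a simple delta rule, chosen here for illustration) captures the spirit of strengthening and weakening connections through experience.

```python
import math

def train_neuron(examples, lr=0.5, epochs=500):
    """A single artificial 'neuron': weighted connections that are
    strengthened or weakened after every mistake (a delta rule)."""
    w = [0.0, 0.0]  # connection strengths
    b = 0.0         # baseline activation
    for _ in range(epochs):
        for (x1, x2), target in examples:
            z = w[0] * x1 + w[1] * x2 + b
            prediction = 1 / (1 + math.exp(-z))  # squash to 0..1
            error = target - prediction          # how wrong were we?
            w[0] += lr * error * x1              # nudge each connection
            w[1] += lr * error * x2
            b += lr * error
    return w, b

def predict(w, b, x1, x2):
    return 1 / (1 + math.exp(-(w[0] * x1 + w[1] * x2 + b)))

# Four examples of the OR pattern; the neuron learns it from its mistakes.
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_neuron(examples)
```

After enough passes through the examples, the neuron's output crosses 0.5 exactly when the pattern says it should.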
Large language models (LLMs) like ChatGPT operate on this logic. They’re trained on trillions of words to predict which word is most likely to come next.
If I sing, “Don’t stop…” you’ll likely finish, “believin’."
That’s prediction, your own neural network, fine-tuned by cultural repetition.
AI works the same way: input → pattern → prediction → output.
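To make that loop concrete, here is a toy next-word predictor: it counts which word follows which in a scrap of training text (the lyric is just an illustrative dataset), then outputs the most frequent follower. Real LLMs weigh vastly more context, but the input → pattern → prediction → output shape is the same.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which word follows it in the training text."""
    followers = defaultdict(Counter)
    words = text.lower().split()
    for current, nxt in zip(words, words[1:]):
        followers[current][nxt] += 1
    return followers

def predict_next(followers, word):
    """Output the follower seen most often during training."""
    if word not in followers:
        return None  # no pattern learned for this input
    return followers[word].most_common(1)[0][0]

lyrics = "don't stop believin' hold on to that feelin' don't stop believin'"
model = train_bigrams(lyrics)
print(predict_next(model, "stop"))  # → believin'
```

The model has no idea what the words mean; it simply reproduces the pattern its data repeated most, which is exactly why the training data matters so much.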
Our Brains: The Original Prediction Machines
Our brains aren’t passive observers of reality; they’re constant time travelers.
Every moment, your brain is running a simulation of what’s about to happen next.
Neuroscientist Karl Friston (2010) refers to this as predictive processing: the idea that the brain continually generates hypotheses about the world, tests them against incoming sensory data, and updates them based on errors.
Your mind builds a working model of the world, shaped by every past experience, emotion, and memory.
That model sends top-down predictions, guesses about what you expect to see, feel, or hear. Then, sensory data floods in as bottom-up input. When there’s a mismatch, your brain experiences a “prediction error” and updates its model.
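That cycle can be sketched as a tiny simulation (a deliberately simplified stand-in for the brain's actual machinery): the model holds a belief, compares it against each new observation, and shifts by a fraction of the prediction error.

```python
def predictive_update(belief, observations, learning_rate=0.3):
    """Top-down prediction meets bottom-up input: on each step the
    model predicts, measures the mismatch, and updates its belief
    by a fraction of that prediction error."""
    history = [belief]
    for observed in observations:
        prediction_error = observed - belief        # mismatch with reality
        belief += learning_rate * prediction_error  # update the model
        history.append(belief)
    return history

# Expecting quiet (0.0), but the room keeps turning out loud (1.0):
trace = predictive_update(0.0, [1.0] * 10)
```

Each surprise moves the belief only part of the way, so the model converges gradually toward what the world keeps showing it, rather than lurching with every new input.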
Jake Greenspan describes this beautifully in Your Brain: The Time Traveler That Predicts Your Next Move (2025):
“Our brains are not passive recorders of the present; they’re dynamic, predictive machines constantly anticipating the future... less like a camera capturing a scene and more like a sophisticated AI constantly running simulations.”
That anticipatory ability (the capacity to feel ahead) is what allows humans to navigate complexity, empathize, and connect.
Training Data: Machines and Minds
Just as AI is only as good as its training data, so are we.
Feed a system biased or narrow information, and it will produce biased, narrow results.
In 2018, Amazon scrapped an experimental hiring algorithm after it systematically downgraded résumés that included the word “women’s."
The system wasn’t necessarily malicious; it was simply well trained on biased historical data. Its bias reflected our own (Reuters, 2018).
Our minds can fall into the same trap.
If we surround ourselves with sameness (people who look, think, and speak like us), our “neural network” learns to expect and prefer that sameness.
If we want to output empathy instead of judgment and assumptions, we have to learn to diversify our data.
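As a toy illustration (hypothetical words and data, not the actual Amazon system), a résumé "scorer" that only counts words it saw in past hires will quietly downgrade anything its history never contained, and broadening the history removes the skew:

```python
from collections import Counter

def train_scorer(hired_resumes):
    """Score words by how often they appeared in past 'hired' résumés.
    The model can only prefer what its history contains."""
    return Counter(word for resume in hired_resumes for word in resume.split())

def score(model, resume):
    # Words never seen in training contribute zero.
    return sum(model[word] for word in resume.split())

# Deliberately skewed history: "women's" never appears.
history = ["chess club captain", "football team captain", "chess team"]
model = train_scorer(history)

narrow = score(model, "women's chess club captain")
familiar = score(model, "football chess club captain")
# narrow < familiar: the unfamiliar word is silently penalized.

# Diversify the training data and the gap closes.
diverse_model = train_scorer(history + ["women's chess club captain"])
```

Nothing in the code is hostile; the skew comes entirely from what the training set left out, which is the whole point.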
The Everyday Practice of Sonder
Sonder (the realization that everyone around you has a story as vivid and complex as your own) is more than a poetic idea. It’s a neurological discipline.
If our brains are always predicting based on limited input, sonder is the practice of expanding and diversifying the dataset.
It’s curiosity over assumption and presence over automation.
AI engineers regularly audit their models to check for bias.
And we as humans need to do the same with our minds.
From AI to Empathy
AI might be the next frontier everyone is excited about, but our brains did this work first.
Both systems follow the same loop:
INPUT → TRAIN → PREDICT → OUTPUT.
What you feed it matters.
So ask yourself:
What data am I training my mind on?
How can I diversify my dataset?
It matters because every time you choose curiosity over certainty, you reprogram the original neural network.
With Love,
Dre


