What are screens training us to become?
Several questions come up again and again when we discuss devices, distracting platforms, and endless feeds:
Do they affect our attention spans?
Do they affect learning?
Are they good or bad?
These questions matter, but they assume the effects are obvious and external, which they are not.
A more important question is: What kind of minds do screens produce?
Not what they promise, but what they train us to become.
If you’ve noticed it’s becoming harder to pay attention when you’re using devices, this article may help you understand why.
When technologies provide us with new tools, they also change what we value. In “Technopoly,” Neil Postman wrote about something he called “the great symbol drain,” which is the loss of meaning from the big ideas that hold culture together: education, truth, tradition, and purpose. Postman wrote this before the smartphone, social media, or the feed. He was writing about television and bureaucracy, but what he described back then has only intensified.
When we stop asking basic questions like “what is this for?” or “what purpose is this meant to serve?”, we default to what’s easy to measure: efficiency, engagement, convenience, metrics. When a new tool is introduced and we ask “why?”, the answer is usually the same: to make things more efficient and more engaging. These goals are not bad, but they’re incomplete. They describe how we do something, not why it matters. Efficiency is a method, and engagement is a lever. They’re accelerants, and accelerants without direction don’t take you somewhere better; they just take you there faster.
One of the places we can observe the cognitive effects of screens is in the classroom, because learning is measured there. And the picture that emerges is uncomfortable, not because it’s necessarily damning, but because it’s unclear.
Cognitive neuroscientist and teacher Jared Cooney Horvath has testified that when digital technology becomes the default medium for learning, outcomes often suffer. Not because teachers are failing or software is poorly designed, but because human learning is biological and social. We learn best from other humans. Screens bypass many of the mechanisms that make learning stick.
The exact numbers can be disputed; Horvath’s statistics circulate through clips and summaries more than clean citations. But the OECD’s “Students, Computers and Learning” (2015) found that adding computers to classrooms did not improve learning, and in some cases correlated with worse performance. More recent PISA analyses point to the same uncomfortable pattern: the relationship is not linear, and more technology does not automatically mean better learning.
Some researchers argue that the effects are overstated; that screen time panic is the new moral panic, and the data doesn’t support the alarm. The loudest claims may be exaggerated, but “not as bad as the headlines” isn’t the same as “fine.” And the absence of definitive harm isn’t evidence of safety.
Here’s what makes this situation genuinely strange: we’re in the middle of one of the largest uncontrolled cognitive experiments in human history, and we can’t quite agree on what’s happening. Cognition is hard to measure, causation is hard to isolate, and the technology changes faster than studies can track it. If a pharmaceutical trial produced results this ambiguous, we would pause it, or at the very least proceed with caution. Yet somehow that logic doesn’t apply here: no one is running a trial, there is no control group, and we’re all the subjects.
Postman warned that one of the characteristics of Technopoly is treating measurement as truth: numbers arrive with an aura of objectivity that exempts them from interpretation. But cognition is not a clean object. Attention span is not a single stable unit, learning is not a single thing, and intellect is not a meter reading.
So when people argue about whether attention is “declining,” they get stuck on methodological questions: Which test? Compared to when? What about stress, sleep, economics? Causation or correlation?
While these are all valid questions, they can become a way to avoid the more personal one:
What do you notice happening inside your own mind?
I’m not saying this should substitute for evidence, but you don’t need a study to notice what’s changed in your own mind. And waiting for definitive proof while the experiment runs on you isn’t neutrality; it’s a choice, whether we frame it that way or not.
The deeper issue isn’t distraction. When tools become ubiquitous, they don’t just help you complete the task; they start to redefine what the task is. If the medium rewards skimming, clicking, switching, and extracting answers quickly, then over time we adapt toward those behaviors.
Instead of asking, “What kind of mind should this cultivate?”, we ask, “How do we make this faster, smoother, or more engaging?”
None of these behaviors is inherently bad, but they are not the same thing as depth. And scanning, hopping, checking, switching, half-reading, never fully landing: all of it leaves us with a sense of low resolution, like a question left unanswered. The mind becomes what it repeatedly does, and modern interfaces relentlessly reward fragmentation. Even when the content is “educational,” the medium pushes the same behaviors.
You may have noticed this change in yourself. Have you ever been reading an article, hit a dense paragraph, and felt your hand drift toward a new tab? That’s the environment’s conditioning. If sustained reading starts to feel like swimming in jeans, it’s not because you’re weak; it’s because you’re adapting.
So, what are devices doing to us?
At some point, the debate stops being theoretical.
We can argue about studies forever and debate causation versus correlation.
But none of this changes the fact that most of us can feel something shifting in how we think.
Instead of asking the familiar questions (do devices affect our attention spans? Should schools ban them? Are screens good or bad?), which assume the effects are obvious when they are not, we should ask: what kind of person does this environment reward?
Because if our environments are optimized for speed, novelty, and constant switching, we shouldn’t be surprised when deep work feels hard, boredom arrives fast, and sustained thinking is a struggle. It’s by design: you’re adapting to the environment.
People use screens for all kinds of reasons, and some don’t feel they have a problem at all. For some, scrolling is rest, relief, or just a pause. That’s not the problem we’re talking about. The problem is much narrower: the gap between how you intended to use your time and how it actually gets spent. If you’re experiencing that gap, plenty of solutions exist; I’ve written an article about the options here.
We’re building a tool to explore these ideas in practice, called Algoasis. Right now it works only in Safari on Mac, iPhone, and iPad; not because other platforms don’t matter, but because they impose limitations that make this kind of control unreliable. We want it to be broader, but for now Safari is the only place we can build it properly.
We’re not saying you should spend less time on screens, but the default environment is training us. And we’re not here to promote moral purity or a nostalgic retreat to the past. We want to help make depth possible again, by building tools that sit between you and the environments optimized for skimming and short-form consumption.
The biggest risk isn’t that we lose time.
It’s that we forget how to use it.