What are screens training us to become?
We tend to talk about screens in terms of outcomes.
Do they improve learning?
Do they hurt attention?
Are they productive or distracting?
Those questions sound reasonable.
But they assume the effects are obvious, and external.
They're not.
The more interesting question is quieter.
What kind of minds do screens produce?
Not what they promise.
What they train.
If you've never felt uneasy about your attention, this piece probably isn't for you. But if you've noticed something shifting, if you've felt the drift, this might help you name it.
"Does it work?" isn't the right question
Neil Postman had a talent for noticing something most of us miss: technology doesn't just add new capabilities. It quietly changes what counts as normal. It changes the criteria. It changes what we even mean by words like learning.
In Technopoly, written in 1992, before smartphones, before social media, before the feed, he described what he called "the great symbol drain," the slow leak of meaning from the ideas that once held a culture together: education, truth, tradition, even purpose.
Postman was writing about television and bureaucracy. But the tendencies he identified haven't faded. They've intensified, and become personal.
What replaces those heavy symbols isn't usually a new ideology.
It's something more boring: efficiency, engagement, convenience, measurement.
This is how "progress" arrives when a culture loses patience for first principles.
And learning is one of the clearest places to see it.
If you ask why new tools get introduced, the answer is almost always the same:
to make things more efficient and more engaging.
Postman's point wasn't that those goals are evil. It was that they're incomplete. They describe how we do something, not why it matters.
Efficiency is a method. Engagement is a lever. They're accelerants.
But accelerants without direction don't always take you somewhere better. They just take you somewhere faster.
We're running an experiment with no control group
One of the few places we can begin to observe the cognitive effects of screens is in classrooms, because learning there is measured. And the picture that emerges is uncomfortable. Not because it's damning, but because it's murky.
Cognitive neuroscientist and former teacher Jared Cooney Horvath has testified that when digital technology becomes the default medium for learning, outcomes often suffer. Not because teachers are failing or software is poorly designed, but because human learning is biological and social. We learn best from other humans. Screens route around many of the mechanisms that make learning stick.
The exact numbers are contested. Horvath's viral statistics circulate through clips and summaries more than clean citations. The OECD's Students, Computers and Learning (2015) found that simply adding computers to classrooms did not improve learning, and in some cases correlated with worse performance. More recent PISA analyses land on the same uncomfortable shape: the relationship isn't linear. More technology does not automatically mean better learning.
Some researchers argue the effects are overstated; that screen time panic is the new moral panic, and the data doesn't support the alarm. They may be right that the loudest claims are exaggerated. But "not as bad as the headlines" isn't the same as "fine." And the absence of definitive harm isn't evidence of safety. It's just absence.
Here's what makes this genuinely strange: we're in the middle of one of the largest uncontrolled cognitive experiments in human history, and we can't quite agree on what's happening.
The research is inconclusive not because the question doesn't matter, but because cognition is hard to measure, causation is hard to isolate, and the technology changes faster than studies can track it.
When a drug trial returns ambiguous early results, we don't shrug and declare the drug safe. We proceed with caution. The same logic applies here, except no one is running the trial, there's no control group, and the subjects are everyone.
The tool doesn't just support learning; it reshapes it
The deeper issue isn't distraction.
It's definition.
When a tool becomes dominant, it doesn't just help you do the thing. It starts to redefine what the thing is.
If your primary medium rewards scanning, clicking, switching, and extracting answers quickly, then learning, over time, begins to bend toward those behaviors.
You stop asking: What kind of mind should this cultivate?
And start asking: How do we make this faster, smoother, more engaging?
That shift seems harmless until it spreads.
Once a tool is central, you don't just adopt it; you adopt its assumptions. Learning begins orbiting the values the medium is naturally good at: speed, responsiveness, modular tasks, quick feedback, easy measurement, constant interaction.
None of these are inherently bad.
But they are not the same thing as depth.
The measurement trap
This is where the conversation usually breaks down.
Postman warned that one of Technopoly's quiet moves is to treat measurement as truth, as if numbers arrive with an aura of objectivity that exempts them from interpretation.
But cognition isn't a clean object.
Attention spans aren't a single stable unit.
Learning isn't one thing.
Intellect isn't a meter reading.
So when people argue about whether attention is "declining," they get stuck in familiar loops: Which test? Compared to when? What about stress, sleep, economics? Is it causal or merely correlational?
All valid questions.
But they can also become a way to avoid the more personal one:
What do you notice happening inside your own mind?
This isn't a substitute for evidence. But you don't need a study to notice what's changed in your own mind, and waiting for definitive proof while the experiment runs on you isn't neutrality. It's a choice, whether we frame it that way or not.
Skimming isn't reading, and we're all practicing it
One of the most important shifts of the screen era isn't "more screen time."
It's the behavior itself: scanning, hopping, checking, switching, half-reading, never fully landing.
Short-form video didn't invent this on its own. It's too neat to blame one format. Life is chaotic: sleep debt, stress, information overload, loneliness, endless tabs, constant notifications.
Causation is hard.
But training is easier to see.
The mind becomes what it repeatedly does.
And modern interfaces rehearse fragmentation relentlessly.
Even when the content is "educational," the medium often pushes the same behaviors: quick hits, low friction, constant novelty, immediate feedback, endless chances to abandon.
You may recognize it physically. You start a long piece, hit a dense paragraph, and your hand drifts toward a new tab before you've consciously decided anything.
Over time, sustained reading starts to feel like swimming in jeans.
Not because you've become weak.
Because you've been conditioned.
So… what are devices doing to us?
At some point, the debate stops being theoretical.
You can argue about studies forever. You can debate causation and confounds.
But it doesn't change the fact that most of us can feel something shifting in how we think.
So the questions that matter aren't "Are screens good or bad?", "Should schools ban devices?", or "Is TikTok evil?"
Those questions assume the effects are obvious.
They aren't.
A more honest question is:
What kind of person does this environment make it easy to become?
And what kind of person does it make difficult?
If the environment optimizes you for speed, novelty, and constant switching, it shouldn't surprise us when deep work feels harder, boredom feels unbearable, reading feels hostile, thinking feels like wading through fog.
None of that is a character flaw.
It's adaptation.
It's design.
Who this is for
People use screens for different reasons, under different pressures. Sometimes scrolling is rest. Sometimes it's relief. Sometimes it's just a pause.
That isn't the problem.
The problem is narrower: the gap between how you intend to use your time and how it actually gets spent. If that resonates, this might help.
Right now, Algoasis works only in Safari, on Mac, iPhone, and iPad. Not because other platforms don't matter, but because they impose limitations that make this kind of control unreliable. We want this to be broader. For now, Safari is the only place we can build it properly.
Why Algoasis exists
Algoasis isn't built on the belief that screens should disappear.
It's built on the belief that the default environment is training us, whether we notice it or not.
So the goal isn't moral purity.
And it's not just nostalgia.
It's making depth possible again.
Not through willpower. Through constraints.
Because if skimming becomes the default, the real risk isn't that we lose time.
It's that we forget how to use it.
References
- OECD, Students, Computers and Learning: Making the Connection (2015)
- OECD/PISA reporting on technology use and learning outcomes
- Jared Cooney Horvath, U.S. Congressional testimony on classroom technology
- Neil Postman, Technopoly: The Surrender of Culture to Technology (1992)