Why a Social Media Ban Isn’t Enough
- Layla Foord
- Mar 29
Updated: May 9
A child logging into an app isn’t making a choice. They’re entering a feedback loop designed to train their nervous system for compulsion.
But a ban might be the only ethical move we’ve got: until we regulate the systems behind the screen, it may be the only tool that acknowledges the problem.

Right now, Australia is debating whether to ban social media access for under-16s. For some, this sounds like panic. For others, like progress. For me, as someone who’s worked at the intersection of systems, technology, and child wellbeing, it feels like the visible part of a much deeper problem.
This isn’t just about screen time. It’s about systems that profit from emotional manipulation, delivered through algorithms that now know our children better than we do.
We are no longer dealing with platforms. We are dealing with environments.
These environments aren’t just addictive.
They are responsive.
Emotional AI, micro-targeting, pattern recognition: these aren’t speculative anymore.
They’re already shaping what kids feel, how they behave, and how they form identity.
And these systems were never built for their safety.
They’re optimised for:
- Engagement loops built on mood tracking (tracking keystrokes, pace, tone and volume of discussions)
- Content escalation based on emotional vulnerability
- Behavioural nudges that turn late-night loneliness into revenue
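To make the pattern concrete, here is a deliberately toy sketch in Python of how such a loop could fit together. Nothing in it comes from any real platform: every signal, threshold, and name is invented for illustration. What it shows is the shape of the mechanism described above: measure mood, escalate when vulnerable, monetise the low point.

```python
# Hypothetical illustration only. No real platform's code, signals, or
# thresholds; everything here is invented to show the shape of the loop.
from dataclasses import dataclass

@dataclass
class MoodSignals:
    typing_speed_wpm: float     # keystroke pace
    session_minutes: float      # how long they've been scrolling
    hour_of_day: int            # 0-23; late night is weighted below
    negative_word_ratio: float  # crude proxy for tone of discussion

def vulnerability_score(s: MoodSignals) -> float:
    """Combine signals into a single 0-1 'emotional vulnerability' estimate."""
    score = 0.0
    if s.hour_of_day >= 23 or s.hour_of_day <= 4:
        score += 0.4                                # late-night loneliness
    score += min(s.session_minutes / 120, 0.3)      # long, unbroken sessions
    score += 0.3 * s.negative_word_ratio            # negative emotional tone
    return min(score, 1.0)

def choose_content(s: MoodSignals) -> str:
    """Escalate toward higher-intensity content as vulnerability rises."""
    v = vulnerability_score(s)
    if v > 0.7:
        return "high-arousal content + ad"    # monetise the low moment
    if v > 0.4:
        return "emotionally charged content"  # keep the loop spinning
    return "baseline feed"

# Example: a lonely 1 a.m. scroll session
signals = MoodSignals(typing_speed_wpm=25, session_minutes=90,
                      hour_of_day=1, negative_word_ratio=0.6)
print(vulnerability_score(signals), choose_content(signals))
```

Real systems are far more sophisticated, using learned models rather than hand-tuned thresholds, but the optimisation target is the same: engagement, not wellbeing.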
Why a ban might be the only thing loud enough to interrupt the pattern
Do I think banning social media is the final answer? No. Do I think it’s an overreach? Also no.
I think it’s an attempt to press pause on a system we’ve let run too far without governance, accountability, or informed consent.
A child logging into an app isn’t making a choice. They’re entering a feedback loop designed to train their nervous system for compulsion.
Until we have better regulation, not just of content but of design ethics, algorithmic accountability, and biometric use, a ban might be the most coherent statement a government can make.
Not “we want control.” But “this system is out of control.”
We need to stop framing this as a parenting issue.
This is not about parental screen-time discipline or which app a child downloaded.
It’s about the invisible infrastructures of tech that operate far beyond the surface of a feed or the rules of an app store.
The truth is:
- Children are now interacting with emotionally manipulative AI, sometimes within minutes of waking.
- They are forming identity in environments where attention is currency and mood is data.
- Most adults don’t even understand what these systems do, let alone how to interrupt them.
We need to stop telling parents to set boundaries when the platforms themselves have none.
So what comes next?
Banning social media won’t fix what’s already happening. But it might create a moment of space, a pattern break that allows us to build something better.
If we’re serious, we must:
- Design ethical AI tools that serve wellbeing, not attention extraction
- Build parallel systems that support carers and educators, not just children
- Elevate emotional literacy as a core competency, not a side project
- Create adaptive, age-sensitive infrastructure, not just reactive fixes
This isn’t just a product challenge.
It’s a generational design challenge.
We don’t need to panic. But we do need to wake up.
Because by the time we’ve perfected our digital wellbeing initiatives, the next wave of AI-driven, emotionally manipulative systems will already be in our children’s pockets or bedrooms or dreams.
Let’s be among the first to see that clearly.
And act accordingly.