Spark Space

Banksy didn't do this one - 'AI Hiding in Denial'

AI hides in the word denial


Sorry, Banksy. I hope you appreciate the homage, and thank you for the inspiration. If anyone can help raise awareness of the dangers AI poses to artists and their images, it's you.


I made this image using Google’s Gemini and its little “NanoBanana” assistant. I’d just noticed that “AI” sits right there inside the word “denial”, hiding in plain sight, and I thought it would be fun to visualise. I tried to generate it in ChatGPT… but it wouldn’t let me. Over and over, it tripped guardrails I hadn’t expected.


Gemini didn’t hesitate for a second.


And that contrast stopped me. Not because I worried Banksy would be annoyed that I’d used his style to bring an idea to life, but because it revealed something much bigger underneath.

If one system refuses a harmless idea for safety reasons, and another allows it without a blink, what else is slipping through the gaps? For example, Gemini will let you upload a photo of a real person and place them into entirely different situations, clothes, or bodies. Seamlessly. No friction. No ethical prompt. No “are you sure?”.


As a mother, and as someone working in tech, education, and mental fitness, that unsettled me far more than the graffiti ever could.


It reminded me of a recent conversation in which Bernie Sanders asked an AI researcher a blunt question: if AI is transforming everything, who is actually steering it, and are they thinking about whether you and I get a better life out of it?

The honest answer was… uncertain.

I’m not anti-AI. I work with it every day, I build with it, I teach with it. The potential upside is enormous: medicine, learning, creativity, accessibility, whole new ways of expressing what it means to be human.

But the safety gap between platforms is widening fast. And the people most at risk are the ones who won’t see the difference: children, young people, and anyone who assumes all AI tools behave the same.

This little graffiti experiment made something click for me. I can’t unsee it now. So here’s the question I’m left with, and I genuinely don’t have the full answer:


How do we influence the safeguards of the tools that are already shaping our world?


And if you (or your kids) are using AI, how do you choose the systems that are actually designed to keep them safe? If you're involved in this space, I'd love to talk to you about it.

-Layla
