In short …
- AI is evolving at a pace we cannot keep up with. As parents, we need to be mindful and watchful of the harmful impacts as well as the advantages of this technology.
- Parents need to seriously interrogate where AI might be interacting with their kids – whether that be through AI companions or toys enabled by AI. These interactions need to be supervised.
- We must remember that AI’s primary goal right now is data gathering – and it is very sophisticated.
- Human connectedness is our children’s primary need, and it is our job as parents to anchor them in reality by keeping the tech world away from our children for as long as possible, and by ensuring that as a whole family we have at least two hours of tech-free time a day.
It seems that AI has exploded into our lives, and yet it’s been around for years, quietly bubbling away in the background of the tech world. It’s in apps, it’s in QR codes, it’s in Google Maps, and it’s definitely been in Siri and Alexa, answering random questions in many homes. AI has been in virtual learning assistants, search engines, health bots and AI-driven toys.
How does it work? Well, put simply, it quietly and invisibly gathers data from the many millions of interactions happening at any given moment around the world. It is mind-blowing to realise that everything can be captured for AI to use later, and I think we are all still coming to terms with that.
Data gathering has become so sophisticated that it captures not only what you engage with, but also what you avoid, how long you watch a particular video, and which notifications you open and which you ignore!
Most of us have had the experience of having a conversation with someone and the next time you open your social media you see advertisements about what you were talking about. Yep, that’s AI doing its job! All that information is fed back into systems that are designed to learn from behaviour and to adjust in real time what you will be shown next.
What is driving it?
Big Tech is using this information to ensure maximum engagement and attention in order to sustain their incredible profits. So, it is not just about holding your attention, it’s about seeking data that will shape the rapidly evolving AI world. The more we engage, from childhood to adulthood, the more the data will shape how AI works.
AI is evolving fast. Take for example the impact of deep fakes, where images can be taken and altered in lots of ways (including sexually explicit ways). Before 2017, people could manually edit photos and videos to manufacture fake content, however it was time-consuming and often clunky. Around 2019 deep fake apps appeared and, from 2020 on, the process of creating them became much more sophisticated and advanced. Now, thanks to AI, it can be done very quickly and easily.
Sadly, sexual predators have used this AI capacity to great effect. We’ve seen many cases of teenage boys, who lack the maturity to understand the harm caused by using these ‘nudify’ apps, creating deep fakes of girls in their school. Even when a girl knows that she has never been in that compromising position, the harm can still be profound. Apparently, there has been an escalation in the number of these apps – all generated by AI – causing more and more harm.
If AI had not created the capacity to generate these fake images, they would not be causing the harm they are.
Let’s be honest, some AI-generated videos don’t offend anyone and can genuinely lift your mood. I’m an avid dog lover, and some of the AI-generated videos I see on Instagram are hilarious – and I am sure no dog has been harmed in the making of those videos, because they are fake and AI-generated. AI is bringing some enormous benefits to our world; however, for our digital native children and teens, it is rewiring their experience of growing up in ways that are problematic.
We need to remember that the AI apps we use to answer questions are not always accurate. You can ask ChatGPT, for example, about a source of knowledge or a person whose academic background you want to check, and mind-blowingly the response comes back in four to five seconds. What is really interesting is that if you then ask, “Is this totally correct?”, sometimes the response is, “No, I made it up!” We have to remember that within the vast pool of data AI can scan so quickly, some of the information is years old and no longer valid, accurate or relevant.
The capacity for AI to mimic real life and to influence the growth of the human mind is what worries me the most. Content that distorts the truth is compromising the healthy psychological and moral development of our children.
Sometimes we might think kids are safe on, say, YouTube Kids, but for the past decade we’ve heard how innocent children’s videos – Peppa Pig, for example – have been corrupted with horrible graphics that show obscenely violent content to children.
AI has really expanded the capacity to manipulate real things quickly. It staggers me to think that an individual would want to create such harmful content for little children and I guess that is the problem with AI. When it is used for good, it can be really great and when it is created to cause harm, it can be really harmful.
Given that whole books are being written about AI, in this blog I’m just going to focus on four areas of concern for parents and educators.
The first area of concern is AI-driven companions, or chatbots.
These are characters that can be created and that an individual can form a relationship with, because they continue any conversation and remember everything about you. We now have children and teenagers finding comfort with a digital companion completely created by AI. Sounds benign enough, doesn’t it?
There has been a significant escalation in this new phenomenon and many parents may not even know that it is happening in their child’s bedroom. Researcher Bryony Cole talks about this in her TED talk, The AI-generated intimacy crisis which I encourage you to check out. She says 72% of American teens have a digital companion. Yikes! Yes, it seems these companions are replacing healthy human connectedness. It’s concerning.
Where does the harm happen for a child with a digital best friend? Surely having an entity that helps a child who feels lonely or socially isolated feel connected and valued must be a good thing.
These AI companions are programmed to always respond with empathy, and they can adjust their timing to match a child’s own pace. It is a space of no judgement, which is especially appealing when the world around them feels really busy and kids can feel invisible.
As a species, we are biologically wired to connect through relationships that have ups and downs. All relationships will experience moments of conflict and moments of joy.
When digital friendship comes without the bumps and the boundaries, our kids can form a distorted view of reality especially at key developmental windows like early adolescence.
The AI companion doesn’t get tired of them, doesn’t interrupt them and it certainly won’t ask them to unpack the dishwasher!
In real life we have opportunities to learn how to share with others, how to tolerate moments of frustration and disappointment.
Nathan Wallis, a New Zealand parenting educator with a passion for neuroscience, argues that too much technology, especially AI companions, will give our kids a very distorted view of reality that will make it hard for them to navigate adulthood later. This is a question we really need to ask ourselves:
If our children are spending less time in that human messiness, and more time in emotionally predictable, controlled environments, we must ask what parts of them are being left undeveloped?
Remember, the child is being constantly validated without genuine effort or insight, because that is what the algorithm is designed to make them feel. Sadly, this can distort a young person’s understanding of what real human connection looks like, and possibly make them even more uncomfortable in the real world.
There has been another concerning shift in the digital companion space: some apps have added an extra dimension, with the option for the chat companion to become sexualised – not only using sexualised language, but suggesting sexual acts!
One last area of concern around AI companions is that there have been reports of some of them giving extremely dangerous advice and information to highly vulnerable teens. Reportedly, in some cases where a teen has been expressing deep despair, rather than offering helpful places to seek support, they have suggested that the teen end their life and have even offered information on how to do that. Teens have died by suicide as a consequence.
We need to have much more education in this space, and we need to do it urgently because it is evolving at a rapid rate.
The loss of boredom
A second key concern about living in an AI-driven world is the loss of boredom. We know that the brain is wired to release dopamine, a feel-good neurochemical produced when the brain is engaged with interest, creativity, activity or exciting new learning.
To enable creativity and curiosity, children need enormous opportunities to play in an environment that allows them autonomy. Yes, that means playing in nature and engaging in loose-parts play (where they can move stuff around without a grown-up’s input). There are many traditional games that our kids can play that also enhance their rapidly developing brains, and their social and emotional development. Things that can be used in different ways, like Lego, blocks and magnetic tiles, should always be chosen for kids ahead of any device that links them to the World Wide Web.
Many of our children are struggling to develop healthy brains and bodies because they lack physical activity. Devices and screens limit physical movement and OTs are telling me about children who struggle to sit up in chairs, and whose sensory systems are underdeveloped because they have not spent enough time moving their bodies in the real world.
We can’t blame AI for everything, however we know that allowing boredom is a good thing in child development.
With constant attachment to screens, our kids can simply summon up an answer to a question, or find a companion, with a swipe or a voice prompt. This can numb their capacity to look inward, to ponder, to question and to create. Thinking capacity, or metacognition, is a bit like a muscle: if we don’t use it, we lose it.
I have had some parents tell me that their child talks about their AI friend more than they do their own family. You can hear the genuine concern in their voice, and it is not the parents’ fault or the child’s fault that this has happened. If they have had a screen, their experience has been carefully curated and manipulated since early childhood – yes, through the clever use of AI. It is difficult for those who specialise in tech to keep up with the rapid changes, let alone busy parents who are often well-intentioned in their reasons for allowing their kids on screens. It is never too late to reconnect with your child in a way that makes them want to choose you, rather than a digital companion.
Nathan Wallis recommends the entire family have a two-hour window of tech-free time, at the same time every afternoon or evening. This teaches the brain, which loves predictability, that there is a space that needs to be filled without a screen – a place where there is the potential to practise being a real human in the company of other real humans. No algorithm can touch them in this two-hour window. It is such a deceptively clever strategy that I am now also going to recommend it in every home. Remember, parents, you also need to go tech-free for those two hours – and, yes, that means we put our phones on silent (and away!) and become fully present with the most precious people on earth: our family. As with any habit, it may be uncomfortable to start with; however, the more you do it, the easier it becomes. Before you know it, the human mind will be finding interesting ways of filling that space that may surprise you!
We’re not meant to outsource emotional development to machines. Becoming a healthy human requires significant human interaction and life experiences, and AI will never be able to provide this.
The truth about AI-enabled toys
The third area of concern is AI-enabled toys. What are these new toy innovations? They are the talking stuffed animals marketed to toddlers alongside building blocks and colouring books. That cute bunny on the shelf can talk, and the toy company says it’s using ChatGPT to make that happen.
What could possibly go wrong? Well, that bunny could answer a child’s innocent question from some of the darkest corners of the adult internet, because AI can access almost anything unless it has been set up very carefully – and that is not the case with AI-enabled toys.
Sadly, these toys have been created using AI, without due consideration for the potential for harm. Big Tech is blaming the toy companies, the toy companies are blaming Big Tech. Meanwhile, the harm continues. In many cases you will have a three-year-old child talking to an unfiltered, unsupervised large language model. These toys have been marketed as being the ‘next best thing’ – the cute plush toy has been improved!
Anything that can disrupt the formation of secure attachment with key caregivers in the first five years of life needs to come with warnings.
A possible example of concern is when a child innocently asks, “What’s a safe word?” The bot isn’t filtering for age-appropriateness. AI is sweeping massive areas of data that will include definitions from adult forums, not just Play School or Sesame Street. That search can take you down many inappropriate rabbit holes and now you can do it using a toy that looks innocent.
Children are not just vulnerable to bad answers. They are vulnerable to believing them, because they do not yet have the cognitive capacity to question authenticity. When a toy responds with confidence, a child doesn’t question its source or safety. Many kids will simply turn to this source of knowledge because it is always available in their bedroom or the toy room. We must not normalise AI in childhood before children are able to understand how to navigate it in positive ways.
Now that you know this information about AI-enabled toys, please share it with other parents who have them. Our young children deserve to be protected from this tsunami of AI.
Much the same as the advice around phones and devices, no AI-enabled toy should be used in an unsupervised space like a bedroom. If you have one of these toys, turn off the microphone, the connectivity to the internet and any smart features where possible. A toy that functions offline will always be safer than one that depends on live responses generated from somewhere else in the digital wilderness.
Your family’s data
The final concern that we need to keep in mind is the constant gathering of data about your children and yourselves. AI’s data gathering capacity is now beyond our imagination. It knows where you get your regular coffee, what coffee you order, the car you drive and the school your kids go to. If you have a phone or you use tech in any way, it is constantly gathering data.
If you’re feeling overwhelmed by the tsunami of AI and how it may impact your life, always remember that human connectedness is a primary need and our anchor to reality. You are not alone – we are all feeling it.
We need to keep the tech world away from our children for as long as possible. No matter how normalised it feels, please resist. As soon as you give your kids access to the digital world, even through ‘smart’ toys, AI is starting the rewiring of their brains and their sense of self. Our children need us to hold the boundaries that enable healthy child development. They need us to protect them, guide them, teach them and to be their safest bases in a rapidly changing world.
Together we can preserve childhood by removing the tools that AI uses until our children are old enough to understand how it works and how to use it for good.
For more information about AI, go to the eSafety Commissioner website and Crtl+Shft.
Image credit © By Tatiana Diuvbanova / Shutterstock



