The Alan Turing Institute has called for improved policy frameworks and greater industry action to protect children's wellbeing when interacting with AI tools, following research revealing that 22 per cent of children aged 8-12 are using the technology.
The data science and AI institute said that despite being amongst those most likely to be significantly impacted by the technology over their lifetimes, children are the least represented in decision-making processes about its development, use and regulation.
The research employed a range of quantitative and qualitative methods, including workshops and surveys. Of those children who use generative AI (genAI), the study found that around 40 per cent use it for creative tasks such as making fun pictures, for finding out information or learning about a topic, and for digital play.
The research revealed that when it comes to play and creativity, children have a strong preference for tactile, offline art materials over genAI. During workshops conducted in collaboration with Children's Parliament, children aged 9-11 expressed a range of concerns about generative AI, including its production of biased and unrepresentative outputs and its environmental impacts.
The Institute said children in the study wanted policymakers and industry to take action to address these areas. Surveys conducted by the Institute found that 82 per cent of parents are fairly or very concerned about their children accessing inappropriate information when using genAI tools such as ChatGPT.
In school-based workshops, the Institute found that innocent prompts given by children often led to inappropriate or potentially harmful images being generated. According to the survey, 76 per cent of parents also reported worrying about genAI's impact on their child's critical thinking skills, whilst 52 per cent of teachers said they are concerned by the increase in AI-generated work being submitted as the child's own.
The Institute said that whilst there is scope for genAI to benefit children, there is a need for them to be more involved in conversations surrounding the technology and its regulation. The researchers made several recommendations to help industry and policymakers, including seeking children's perspectives and improving AI literacy as part of the wider curriculum.
Additionally, they urged developers to consider how their tools could affect children even when children are not the intended users.
"Children's experiences with this technology are significantly different from those of adults, so it is crucial that we listen to their perspectives to understand their particular needs and interests," said Dr Mhairi Aitken, senior ethics fellow at the Alan Turing Institute. "In doing so, we can mitigate the risks and enable age-appropriate generative AI tools to be developed safely to provide benefits to the young people who use them."