In the UK, children are finding ways to bypass online age verification by using fake information and other methods to deceive identification systems. This is according to a report by Internet Matters, an organization that researches children’s online safety.
The findings were reported by Euronews.
As part of the study, 1,270 children aged 9 to 16 and their parents were surveyed. A third of the children reported that they had managed to bypass age restrictions in the past two months. Some of them used unusual methods—for example, drawing mustaches on themselves to “fool facial recognition systems.”
One mother said she caught her son doing this. According to her, the boy drew a mustache with a makeup pencil and was able to pass the verification: the system identified him as 15 years old, even though he was actually 12.
According to the report, 46% of children believe age checks are easy to bypass, while only 17% consider them difficult. Among the most common methods are entering a fake date of birth, using someone else’s documents, uploading videos featuring other people’s faces, or even using video game characters to pass the check.
Older teenagers are more likely to believe they can bypass the restrictions: 52% of children aged 13 and older find it easy, while among younger children, 41% feel the same way.
Children explain that they bypass age checks primarily to access social media (34%), online games or gaming communities (30%), and messaging apps (29%).
The study also showed that parents themselves sometimes help children get around the restrictions: 26% of respondents allowed their children to do so, and 17% assisted them directly. The parents said they did this when they considered the content safe. “I helped my son bypass these restrictions so he could play a game that I knew and considered acceptable,” said the mother of a 13-year-old boy.
The UK’s Online Safety Act took effect in July 2025 and required platforms to implement protective measures for children. According to the report, about 68% of children and parents noticed the new safety tools—including content warnings, reporting options, and restrictions on certain features, such as live streaming.
However, despite these changes, nearly half of children (49%) reported encountering harmful content in the past month, including scenes of violence (12%), material promoting unrealistic beauty standards (11%), and discriminatory content (10%).
The report’s authors recommend integrating child protection mechanisms into digital platforms at the design stage and taking into account the risk level of each service. They also emphasize that access to online content must be appropriate for the child’s age and developmental stage, and parents should be provided with clear instructions on how to set up controls and explanations of the principles behind the algorithms that influence what children see online.