Last Updated on 19th August 2025
Published: August 19, 2025
In July 2025, the UK’s Online Safety Act came into force, requiring platforms by law to protect children and young people from being exposed to potentially harmful content. One of the biggest changes is the introduction of age checks for online users seeking to access adult content. Platforms must now take reasonable steps to ensure users meet the required age for certain content and services – a simple ‘tick box’ confirmation is no longer adequate.
What is Yoti?
Yoti uses facial analysis technology to verify a user’s age: the user is asked to take a photo ‘in real-time’, and algorithms first check that it is a photo of a real person. The algorithms then read the pixels of the image for age identifiers (e.g., wrinkles, grey hair), but do not ‘recognise’ the image as a particular person’s face.
Yoti’s software is used by many popular platforms, including Instagram, Spotify and even OnlyFans.
Some users may have privacy concerns over Yoti, especially if they’re using the app as a central method of ID, and may wonder whether it is safe to use. Yoti’s facial age estimation is built in line with the ‘privacy by design’ principle of the UK GDPR, and the photo is deleted from the system once it has been analysed.
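To make that flow a little more concrete, here is a rough, purely illustrative sketch in Python of how an age check of this kind could be structured. It is not Yoti’s actual code or API – the helper functions are trivial stand-ins we have invented – but it shows the shape of the process described above: capture a live photo, check it shows a real person, estimate an age from the pixels, then discard the photo.

# Conceptual sketch only – not Yoti's real code or API.
# The helper functions are trivial stand-ins for the computer-vision
# systems a provider would actually use.

MIN_AGE = 18  # the age threshold the platform needs to enforce


def capture_live_photo() -> bytes:
    # Stand-in for a real-time camera capture.
    return b"raw image bytes"


def looks_like_real_person(photo: bytes) -> bool:
    # Stand-in for a liveness check (is this a real person,
    # not a picture of a picture?).
    return len(photo) > 0


def estimate_age_from_pixels(photo: bytes) -> float:
    # Stand-in for a model that reads pixel-level age indicators
    # (e.g. skin texture) and returns only a number - no identity lookup.
    return 25.0


def run_age_check() -> bool:
    """Return True if the user appears old enough to continue."""
    photo = capture_live_photo()
    if not looks_like_real_person(photo):
        return False
    estimated_age = estimate_age_from_pixels(photo)
    del photo  # the photo is discarded once the estimate exists
    return estimated_age >= MIN_AGE


if __name__ == "__main__":
    print("Access allowed:", run_age_check())

The important detail is the last few lines: the only thing that survives the check is a yes/no answer, not the image itself.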

What’s the Difference Between Facial Analysis and Facial Recognition?
Whilst often used interchangeably, facial analysis and facial recognition are two different types of software with different purposes. Yoti has previously emphasised that its technology is facial analysis software, not facial recognition.
Essentially, this means the software looks for a face within an image and analyses it for the information it needs (in this case, the age of the subject), rather than trying to identify the person in the photo and collect information about them.
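If it helps to see that difference side by side, the short sketch below contrasts the two in the same stand-in style as the example above. The function names and data are hypothetical, not any vendor’s API: analysis returns only an attribute such as an age estimate, while recognition tries to match the face against a database of known identities.

# Illustrative contrast only - hypothetical functions, not any vendor's API.
from typing import Optional

KNOWN_FACES = {"alice": "template-A", "bob": "template-B"}


def facial_analysis(photo: bytes) -> dict:
    """Analysis: derive an attribute (here, an age estimate) from the
    image without looking up who the person is."""
    return {"estimated_age": 25.0}  # stand-in for a real model's output


def facial_recognition(photo: bytes) -> Optional[str]:
    """Recognition: extract features and try to match them against a
    database of known identities."""
    extracted = "template-A"  # stand-in for real feature extraction
    for name, stored in KNOWN_FACES.items():
        if stored == extracted:
            return name  # the person has been identified
    return None


if __name__ == "__main__":
    photo = b"raw image bytes"
    print(facial_analysis(photo))     # an age estimate, no identity
    print(facial_recognition(photo))  # a named identity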
This type of age verification is important for several reasons, chief among them keeping children and young people away from adult content without platforms needing to work out exactly who each user is.
Top Tips
Have Open Conversations
Consistently initiate conversations with the child in your care about their online activity and ask open-ended questions to leave room for them to speak their mind. Ask questions like, “Who do you like talking to online?” and “What do you like to do when you’re online?”. This will give them the opportunity to share their experiences without feeling judged.
Ensure they know who their trusted adults are if they need to discuss something they have seen online that is harmful or has made them uncomfortable.
Discuss Age Verification
It is likely that the child or young person in your care will come across a platform that asks them to verify their age. Sit with them and talk them through the process, helping them understand why this has been put in place. This makes it more likely they will come to you if they need help with this in the future.
Monitor Their Online Activity
It’s important to be aware of what platforms the child in your care is using without invading their privacy. Explain to them that if they want to use a new platform, they should ask permission from their parent or carer first. If a platform’s age restriction is 18+, we would strongly advise against allowing them access.
Utilise Parental Controls
While parental controls can’t guarantee that a child won’t encounter harmful content, safety settings will minimise the risk and put measures in place to make a platform more age-appropriate and safer.
Explain to the child or young person in your care why you have chosen the selected restrictions and come to an agreement, as this will help them feel involved and in control. If they feel that you have breached their privacy and taken over their device without consent, it may lead them to lie about their online habits.
As the child or young person gets older, you may wish to alter the restrictions, so review them frequently.
You can use Our Safety Centre for help with platform-specific settings.
Further Resources
Join our Online Safeguarding Hub Newsletter Network
Members of our network receive weekly updates on the trends, risks and threats to children and young people online.