
Instagram’s Latest Updates


Last Updated on 7th July 2022


As the summer holidays approach, many platforms have begun releasing features and updates that claim to help parents and carers protect younger users. Instagram is the most recent to do this, with many features building on its promise to improve the platform's impact following significant negative press.

According to Ofcom, over 80% of young people use social media apps and platforms every single day. With Instagram being one of the most popular, it’s important to be mindful of every update. We’ve taken a look at Instagram’s latest updates to help parents, carers, and safeguarding professionals be aware of the safety features available to them on the app – and to know how effective these settings actually are. 

1. Parental Controls

What is it?

Instagram has been releasing different types of parental controls since the start of the year, and has finally released its promised ‘Supervision’ tools. This new set of safety settings was launched in the UK on June 14th, 2022.

How it works

Before parents can implement Instagram’s Supervision feature, an ‘invitation link’ must be sent to the young person’s account. It is the young person’s decision to accept – a parent or carer cannot enforce this without their permission. To maintain a healthy relationship and keep boundaries in place, discuss this option with the young person in your care first.

[Images: supervisee screen and supervisor screen]

Instagram’s Supervision features include:

  • Parents can see their young person’s daily time spent on their linked Instagram account. They can also set time limits between 15 minutes and 2 hours; once the limit is reached, a black screen appears on the account for the rest of the day saying they can ‘come back tomorrow’.
  • ‘Take a break’ reminders can be sent at intervals of 10, 20, or 30 minutes (depending on parental preference) to prompt the young person to have a break from their screen.
  • If a young person reports an account or post, a notification is sent to the supervising account with details of the report (including the reported account/post and why it was reported).
  • All the accounts connected to the young person are visible to their parent or carer, starting with the most recent. Parents can see how many accounts their young person is following and how many accounts are following them, and can view any individual profile.

Instagram’s Supervision features do not allow parents or carers to:

  • Access sent or received private messages.
  • Block or restrict access to specific accounts or topics.
  • View search, browsing, or in-app activity history.
  • Edit or delete posts made by other accounts.


2. Sensitive Content Control

What is it?

The Sensitive Content Control feature allows users to have more power over the type of content they see on Instagram. While this was previously limited to the ‘Explore’ page, it has now been extended to wherever Instagram makes recommendations (such as the ‘Search’ and ‘Hashtag’ pages, as well as a user’s personal feed). Instagram is also planning to add a shortcut to this feature on the ‘Explore’ page, along with a ‘Not Interested’ shortcut that lets users quickly tell the platform’s algorithms which content they do not wish to see (neither shortcut has been given a release date).

How it works

Sensitive content is defined by Instagram as “posts that don’t necessarily break our rules, but could potentially be upsetting to some people.” This includes content that is violent, sexually explicit/suggestive, or promoting regulated drug/alcohol products.

There are three options for the ‘level’ of sensitive content that can appear. These options have been renamed to help users better understand how this filtering system works. 

  • More (previously ‘Allow’) – Users may see more photos and videos that could be inappropriate and/or offensive for younger audiences. This option is not available to users under the age of 18.
  • Standard (previously ‘Limit’) – Users may see some photos and videos that could be inappropriate and/or offensive for younger audiences. This option is the default setting for all users.
  • Less (previously ‘Limit Even More’) – Users may see fewer photos and videos that could be inappropriate and/or offensive for younger audiences. This option is recommended for users under the age of 18.


3. In-App Nudges

What is it?

This new feature is designed to help encourage young people using the platform to ‘discover something new’ while also excluding topics that may be associated with appearance comparison or fixation. The ‘nudge’ will use a new type of notification to interrupt a user if they are spending too much time browsing posts with themes that might make them anxious, upset, or self-conscious. They will then be redirected to an array of ‘positive’ options to choose from to ‘explore next’.


4. ‘Take a break’

What is it?

This updated feature allows users to enable a notification system that interrupts their in-app activity with a ‘Time to take a break?’ message after a selected period: 10, 20, or 30 minutes. A list of possible alternative activities (such as ‘Go for a walk’, ‘Listen to music’, or ‘Write down what you’re thinking’) may also appear. Initial testing showed that 90% of users kept this feature enabled.

New ‘break’ reminders will now feature well-known Instagram creators to increase screentime awareness and encourage users to take a break from the online environment.


5. Partnership with Yoti

What is it?

On June 23rd, 2022, Instagram announced a partnership with Yoti – the age verification system approved by the Home Office. They claim Yoti’s ‘privacy-preserving’ technology will help them provide options for users to verify their age on the platform, allowing them to offer more ‘age-appropriate experiences’. Yoti uses Artificial Intelligence to estimate the age of a user and verify their identity against a provided image, piece of ID, and/or short video selfie. This detects whether a user has given the wrong age or if they have taken a photo from the internet. Yoti is currently used by platforms like Yubo and is GDPR compliant.

Testing of this feature is currently only available to selected users in the US. We will continue to monitor its progress and update our information once it is released in the UK.

