

Read the script below

Hello and welcome to this week’s Safeguarding Soundbites, the podcast that gives you all the latest safeguarding updates and news that you can use to keep the children and young people in your life safer online.

February is here and that means one thing…no, not Valentine’s Day – Safer Internet Day! In case you missed it last week, we’ve created a comprehensive list of all our best resources in honour of Safer Internet Day – which is next week on the 7th. You can use our resources to help kickstart the conversation with the child or young person in your care about everything to do with online safety, from age verification to Snapchat and BeReal. Head over to our website at ineqe.com to find those free resources.

New research from the Children’s Commissioner for England has found that one in 10 children have viewed pornography by the time they are nine years old. Twitter was the platform where the highest percentage of children had seen pornography – 41%, compared with 37% who had seen it on dedicated pornography sites. The research also raised concerns about how the use of physical aggression in pornography is affecting young people’s views, with 47% of young people aged between 16 and 21 believing that girls “expect” physical aggression in sex.

In social media news, Twitter has said it will take less severe action against rule breakers and, from now on, will only suspend accounts that engage in “severe or ongoing, repeat violations” of its rules. Rather than ban users, Twitter will be more likely to limit the visibility of a tweet or tell a user to remove a tweet. It will also allow users to appeal suspensions.

Although some will no doubt welcome the changes, the relaxing of consequences for rule breaking could feed into a more worrying bigger picture. Twitter has implemented a number of changes, including laying off content moderation staff, that raise the question of how safe the platform is for its users, particularly young people.

The Antisemitism Policy Trust has raised concerns over how algorithms help hate to spread online, such as Holocaust denial and antisemitic conspiracy theories. The Trust’s chief executive Danny Stone said he welcomed the upcoming Online Safety Bill but highlighted how it won’t cover legal but harmful materials, including search companies prompting people towards harmful searches. You can find out more about the Online Safety Bill by searching our website at ineqe.com.

A new research paper has revealed the apps and platforms that children feel most unsafe using. The Pupil Safeguarding Review investigated children’s feelings of safety in school and beyond. It found Roblox was the platform where children felt most unsafe, with 15% of children surveyed naming the online gaming platform. Our Online Safety Experts took a look into the top five named – you can find that on our website at ineqe.com.

New data from the Home Office has shown that schools account for the most referrals to the Government’s anti-terror programme. The Prevent programme is a government-led, multi-agency programme that aims to stop people from supporting or engaging in terrorism or extremism. The education sector makes up 36% of all referrals, with the majority being for male pupils and children aged 15-20.

Extremism and the radicalisation of young people have been of growing concern in the past few years, with many worried about the ease of access to materials online. Young people who feel isolated or lonely have previously been targeted and radicalised online. The rise of online personalities who promote misogynistic behaviours, such as Andrew Tate, and of incel communities shows how some vulnerable young people can be indoctrinated, particularly when it comes dressed up as a community that supports one another. To find out more about harmful content online, search ineqe.com for our ‘review of harmful content’.

And finally, if you’re one of the millions of users enjoying ChatGPT, the AI chatbot that students love and teachers hate, be warned: there’s an enemy AI hot on its heels! A new tool from OpenAI is designed to detect whether text has been written by artificial intelligence, such as ChatGPT. The text-generating tool has become popular for its ability to create human-like responses and essays from prompts, resulting in its use being banned in schools and universities. However, the new detection tool is unlikely to send the chatty bot to the AI graveyard – its makers say it’s currently pretty unreliable.

That’s all from me for this week – join me next time for more news and safeguarding updates. In the meantime, you can catch us on social media by searching for ‘ineqe safeguarding group’. If you’re listening via a podcast player or app, make sure to subscribe. Stay safe and I’ll speak to you next week!

