
Read the script below

Colin: Hello and welcome to Safeguarding Soundbites.

Danielle: This is the podcast where you can get all the latest safeguarding news, advice and alerts.

Colin: And today’s episode is a very special one as it’s the last one before Christmas!

Danielle: It’s hard to believe it’s that time already! Colin, are you all sorted for the big day?

Colin: Nearly, Danielle. There may be a few last-minute trips to the shop, but I think Santa has almost everything he needs…how about you?

Danielle: We’re done and dusted in my household, just the wrapping to go.

Colin: Oh that’s the worst bit!

Danielle: No, I love it!

Colin: Doesn’t sound so bad now! But before we get stuck into the festivities too much, what’s happening this week?

Danielle: So Meta have begun rolling out end-to-end encryption, which has caused quite a lot of concern, TikTok has added new comment filtering to safeguard users against hate speech, and we’ll be chatting about why AI is making us all less trusting.

Colin: And, as always, we’ll be finishing off today’s episode with our safeguarding success story of the week, which this week has a very special Christmas theme!

Danielle: Intriguing! Let’s start off with Meta and the news that their plan to introduce end-to-end encryption on Facebook and Messenger is underway.

Colin: So, we have talked about this end-to-end encryption before but for our new listeners or anyone who needs a reminder, this is a communication system that blocks third parties from viewing or reading materials sent between individuals. In other words, it’s about privacy; only you and the person or people you’re communicating with can access your messages.
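For anyone curious about what that looks like in practice, here’s a minimal sketch of the idea using the open-source PyNaCl library. It illustrates the principle only; it’s not the actual protocol Meta or any other platform uses, and the names and messages below are purely illustrative.

```python
# Minimal sketch of end-to-end encryption using PyNaCl (pip install pynacl).
# Illustrative only; real messaging platforms use far more elaborate protocols.
from nacl.public import PrivateKey, Box

# Each person generates a key pair; the private key never leaves their device.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts a message that only Bob can read.
alice_box = Box(alice_key, bob_key.public_key)
ciphertext = alice_box.encrypt(b"See you at 7pm")

# Any server relaying `ciphertext` sees only scrambled bytes; without
# Bob's private key it cannot read the message.

# Bob decrypts using his private key and Alice's public key.
bob_box = Box(bob_key, alice_key.public_key)
print(bob_box.decrypt(ciphertext))  # b'See you at 7pm'
```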

Danielle: Exactly, Colin. So WhatsApp already has this technology and now Meta is rolling this out to Messenger and Facebook too. Voice calls, messages…basically all personal communications made on Facebook or Messenger will be covered by this end-to-end encryption. And while many people see this as a win for privacy, it does raise safeguarding concerns. In fact, many organisations have already spoken out about this, including the NPCC – the National Police Chiefs’ Council. They’ve said that this will have a dangerous impact on child safety: Meta won’t be able to see messages from online groomers containing child sexual abuse materials and therefore they can’t make a referral to the police. They’ve also accused social media companies of putting the safety of children at risk, ignoring warnings from child safety charities and experts.

Colin: Our CEO Jim Gamble spoke to Times Radio about Meta’s decision and noted that while tech companies can and should do more, the problem is wider still. The failure to create a meaningful deterrent lies at the feet of the criminal justice system and the government, who gave ground when it came to the Online Safety Act, creating ‘constructive ambiguity’ around encryption. He also noted that the government had an opportunity to prevent the rollout but didn’t.

Danielle: Yes, it’s a more complex issue than simply putting the blame entirely on tech companies. And just a final note on this story: this isn’t the only update Meta have started rolling out. Users on Messenger will also be able to edit messages for up to 15 minutes after sending them, the timer for disappearing messages has been changed to 24 hours, and Meta also say they’ve made improvements that make photos and videos easier to access in Messenger.

Colin: Okay, thank you, Danielle, for that update. Let’s move on now to TikTok, who are also rolling out a new feature, but this time it’s all about comment filtering. TikTok has previously been criticised for inaction when it comes to comment moderation, particularly in relation to the Israel-Hamas war. Now, the social media platform is launching some new initiatives in a bid to clamp down on hate and discrimination, such as antisemitism and Islamophobia. As well as starting a new anti-hate and discrimination task force, TikTok is introducing ‘Comment Care Mode’, which filters comments deemed similar to ones a user has previously reported or deleted, as well as comments that are inappropriate or offensive or that contain profanity.
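As a rough intuition for how this kind of filtering might work, here’s a toy sketch that hides comments similar to ones a user has previously reported, using simple string similarity. TikTok’s actual system is proprietary and far more sophisticated; the function name, threshold and sample comments below are all made up for illustration.

```python
# Toy sketch of similarity-based comment filtering (not TikTok's actual system).
# Hides comments that closely resemble ones the user previously reported.
from difflib import SequenceMatcher

PREVIOUSLY_REPORTED = [
    "you people are the worst",
    "get off this app, nobody wants you here",
]
SIMILARITY_THRESHOLD = 0.8  # illustrative value, not a real platform setting

def should_hide(comment: str) -> bool:
    """Return True if the comment is similar to a previously reported one."""
    return any(
        SequenceMatcher(None, comment.lower(), reported).ratio() >= SIMILARITY_THRESHOLD
        for reported in PREVIOUSLY_REPORTED
    )

print(should_hide("You PEOPLE are the worst"))    # True: near-duplicate of a report
print(should_hide("Love this video, great job"))  # False: nothing like the reports
```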

Danielle: Sounds positive.

Colin: It does, but I want to point out two things to our listeners. Firstly, this mode has to be enabled. And secondly, it seems to be for content creators specifically. That means it’s for users who are uploading videos to the platform, so it won’t necessarily stop users browsing other people’s videos from seeing offensive and inappropriate comments.

Danielle: Ah okay. Well, I would just encourage listeners who have young people in their care using TikTok to head over to Our Safety Centre at oursafetycentre.co.uk…there’s lots of information on there about blocking, reporting and using the safety settings on TikTok.

Now Colin, have you ever seen a Pope in a puffer jacket?

Colin: I can’t say I have, Danielle…

Danielle: Well I have, and I wonder if some of our listeners might have too. But the image I’m referring to went viral on social media back in March and was of course fake. However, it’s images like these that, according to fact-checking charity Full Fact, have eroded public trust in what we see online. The charity has warned that the rise in AI-generated images is reducing trust in online information and that many people struggle to spot these types of images.

Colin: Some of the AI-generated content is incredibly realistic.

Danielle: It really is. And Full Fact’s Chief Executive Chris Morris has said it’s unfair to expect the public to rely on news outlets or fact-checkers to tackle the issue, and that the new Online Safety Act (formerly known as the Online Safety Bill before it came into law) fails to deal with harms from AI-generated content.

Colin: And of course, if it’s difficult for adults to spot whether something’s AI-generated, it’s going to be difficult for children and some young people, too.

Danielle: Yes, absolutely. It’s important that parents, carers and school staff are talking to the children and young people in their care about the importance of fact-checking. For example, before you share anything online, stop and think: is this real? Especially if it’s derogatory or could be harmful to the person featured in the image. Okay, so maybe the Pope in a puffer jacket is relatively harmless, but we have seen images created to bully, harass and mock, and in extreme cases, AI-generated child sexual abuse material.

Colin: You’re absolutely right, Danielle. It doesn’t seem like AI is going away anytime soon, and over time these images are getting even more realistic. So it’s really important to have those conversations with children and young people about how to fact-check an image. For example, ask yourself: is it likely that the event depicted happened? Can we find another source to back up the story? And don’t rely on looking for mistakes in images, either. Once we might have been able to check for extra legs or pixelation as a sort of ‘tell’, but as AI improves, that’s not a guaranteed way to spot AI involvement.

Danielle: Excellent point there.

Now, it’s time for a very special safeguarding success story of the week…

Colin: That’s right, it’s the news that all across the UK, schools are carrying out amazing acts to help those in need and to raise much-needed funds.

Danielle: From St Patrick’s Primary School in Armagh covering the cost of all school Christmas dinners for their pupils…

Colin: To Chase Terrace Academy in Staffordshire hosting a Christmas market to raise money for the Teenage Cancer Trust.

Danielle: Necton Primary School in Norfolk hosted a Rudolph run to raise funds for The Norfolk Hospice Tapping House.

Colin: And at Hillside High School in Merseyside, each form class is bringing in donations to make up a hamper to give to a family in their school community who needs it.

Danielle: These are just some of the amazing acts schools are involved in this Christmas. It’s a very difficult time, between the rising cost of living and funding cuts, so we wanted to take this moment to say thank you to all the pupils, parents, carers and school staff who are helping make this Christmas extra special for someone.

Colin: And we’d also like to say a huge thank you to you for tuning in – it’s been a great year here at Safeguarding Soundbites.

Danielle: And in fact, we’ll be back in January with a special 2023 round-up episode.

Colin: Plus, we’ll be looking to the year ahead and talking about emerging trends and risks in the digital world that you need to be aware of, so make sure you watch out for that in January. Until then, stay up to date by following us on social media; just search for Ineqe Safeguarding Group.

Danielle: You can also tune in to our brand-new Online Safety Show for children and young people!

Colin: That’s right! Our new monthly show is now available, with episode two out now. Each episode we’ll be talking about the top online safety news, topics and trends that matter to children and young people, speaking to experts, and getting good advice that can be used to stay safer online.

Danielle: So make sure you check that out in our Safer Schools Apps and online Teach Hub platform.

Until next time, it’s goodbye and –

Both: Have a safe and Merry Christmas!



