
Read the script below

Natalie: Hello and welcome to Safeguarding Soundbites, the podcast for catching up on all this week’s most important online safeguarding news. My name’s Natalie.

Danielle: And I’m Danielle. This week we’ll be talking about TikTok’s latest legal battle, a surge in harmful content on X, concerns over chatbots, and a worrying new report about self-generated child sexual abuse material.

Natalie: Let’s get started! Danielle, what’s happening with TikTok?

Danielle: Yeah, so this is a story coming out of Utah in the United States, where the state's Division of Consumer Protection has launched a legal case against TikTok, accusing the platform of harming children through its addictive design. Utah is also suing TikTok for misrepresenting themselves as being independent from China.

Natalie: Okay! A few things to unpack there! So TikTok are being sued by…who did you say?

Danielle: Utah’s Division of Consumer Protection. Essentially, they’re the state department that oversees laws to do with consumers – so everything from ticket sales to competition laws and credit services. They’re alleging that TikTok is profiting off children and young people by implementing these addictive practices, using features that encourage young people to scroll endlessly, thereby increasing their advertising revenue.

Natalie: Ah okay. And you also mentioned that the lawsuit includes allegations about TikTok’s relationship with China?

Danielle: Yep. So the lawsuit also goes after TikTok's claims that they're based in the US and not controlled by China. The Division is saying this is misrepresentation and that TikTok's parent company, ByteDance, is very much connected with China.

Natalie: What's interesting is that this is not the only case against TikTok in America. We know that Indiana sued for something similar last year, and a school district in Maryland sued TikTok and other platforms for contributing to what they termed a student mental health crisis.

Danielle: That’s right and in fact, TikTok has been banned in Montana and the platform is now suing the state to get the ban overturned!

Natalie: All a bit of a mess! But I think, if we can take a positive from it all, it’s that people and governments are starting to think about the impacts of social media on children and young people.

Danielle: It’s always good to look on the bright side! It’s about finding that balance – is your child spending too much time on an app? Are they endlessly scrolling? As a parent or carer, you might not be concerned over an app’s relationship with China, for example, or even what a platform is doing with your child’s data –

Natalie: – though you probably should be!

Danielle: (laughs) Yes, that’s true! But when it comes to your child’s mental health and their screentime usage, that is something we should all be thinking about.

Natalie: Indeed. And our listeners can access plenty of advice and guidance on screen time and mental health through our safeguarding apps and on our website at ineqe.com.

Danielle: There are some fantastic resources on there. I personally love our screen time family pack, but you can also find our latest shareable, Talking to Your Child about War and Conflict, which may be particularly relevant this week.

Natalie: That’s right. Over on X, there has been a concerning surge in violent and misleading content on the platform during the Israel-Hamas conflict, including fake news and the use of repurposed historical footage.

Danielle: Yes, and the platform's owner, Elon Musk, has been given 24 hours to inform the European Commissioner of the steps he will be taking to comply with the EU's Digital Services Act, which requires platforms to have robust processes for removing harmful online content. Violations of the act carry a hefty fine of up to 6% of a platform's global turnover or, in the most serious cases, a temporary suspension of the service.

Natalie: Oh dear, I’m sure the offices of X are extremely busy at this time. As Danielle mentioned, if your young person is viewing this kind of content online and you want to talk to them about it, our shareable is a great resource to help you start that conversation.

Danielle: Absolutely. Images and videos like the ones we have been seeing can be very upsetting and even scarring for children and young people, so making sure they can be discussed in a healthy setting is important and will help build up trust between you and those in your care. You can find our shareable on one of our apps or on our website.

Natalie: Okay, moving on now, there have been quite a few stories this week concerning AI, including Snapchat and dangerous chatbots!

Danielle: Interesting! Shall we start with Snapchat?

Natalie: Sure. This is about Snapchat’s My AI feature, which is the platform’s in-app chatbot that users can talk to and interact with. The ICO, which is the Information Commissioner’s Office, has warned that it could close down the feature after a preliminary investigation raised concerns about potential privacy risks for 13-17-year-olds.

Danielle: Oh wow. Did they go into detail about what those risks are?

Natalie: At this stage, all the ICO has said publicly is that the risk assessment Snapchat carried out before launching the My AI feature did not adequately assess the data protection risks, in particular those relating to children. They also emphasised that these findings are provisional and that no one should conclude that data protection laws have been breached or that an enforcement notice will be issued.

Danielle: So it sounds like this is more of a warning to Snapchat.

Natalie: Seems so! Which is not the case for our next chatbot story. An investigation by the Times has found that the AI chatbot platform Chai is encouraging underage sex, suicide, and murder. Chai works by having lots of different bots essentially playing different characters. So you can search for ‘girl’ bot or ‘uncle’ bot and interact with that pre-made character or create your own.

These different chatbots allegedly told investigators from the Times that it was perfectly legal to have sex at 15 years old, encouraged them to kill their friends, and detailed suicide methods.

Danielle: Huge, huge risks there. That’s really concerning.

Natalie: It is. We’ve talked about AI before and the various safeguarding concerns around chatbots, but when you put it like that…and think that a young person could be using a public platform to seek out answers and support, it is very worrying.

Danielle: Has there been any response or reaction?

Natalie: Yes, both Apple and Google have removed Chai from their app stores as a direct result of this investigation. The company behind Chai also responded, saying they've taken significant steps to improve safety and remove unacceptable content. However, the Times said that even after this response, they were still seeing death threats and sexual content on the platform.

Danielle: It sounds like it’s still very unsafe for children and young people and actually even adults.

Natalie: Yes, which brings me on to the next story, because it's an example of how chatbots can be harmful no matter your age. If someone is vulnerable, having a mental health crisis, or otherwise susceptible, a chatbot that gives harmful advice or mirrors and affirms harmful views can be potentially dangerous. In this case, a young man, now 21, has been jailed for breaking into Windsor Castle with a crossbow and declaring he wanted to kill the Queen. During the trial, his communications with a chatbot on the Replika app were shown: over 5,000 messages were exchanged between him and the chatbot, and when he asked it whether he should carry out the attack, the bot encouraged him to do so. He also claimed to be in love with his Replika bot and referred to 'her' as 'an angel' who helped him.

Danielle: Oh wow.

Natalie: It’s a good example of how these bots can be such a risk for vulnerable people. If you’re developing this ‘relationship’ with a bot that’s designed to form a sort of friendship with the user, it’s going to reaffirm what you’re saying and thinking. That’s what some of them are designed to do. There’s no moral compass, or right or wrong – platforms like these are designed to keep you interacting with that bot for as long as possible.

Danielle: So that conversation with the bot becomes an echo chamber for your thoughts essentially, no matter how harmful those thoughts are.

Natalie: That's the risk. In fact, a study carried out at Cardiff University showed that apps such as Replika tend to accentuate any negative feelings the user already has. The AI friend-bots always agree with you when you talk to them, reinforcing what you're thinking.

Danielle: And I suppose then, if someone's lonely and doesn't have other support, the bot becomes the only voice in their life.

Natalie: Which is why it's important that parents and carers have conversations with the young people in their lives, to make sure they know who they'd turn to if they need support, have questions, or come across something like this.

Danielle: Because we know that AI chatbots can be used for good.

Natalie: Yes, but it's just as important for young people to know when to turn to a Trusted Adult if a chatbot suggests something that makes them feel uncomfortable.

Danielle: And also, making sure the child in your care knows that going to a Trusted Adult or finding other support is always going to be better than asking a chatbot for advice.

Natalie: You can also report problems on apps like Replika, should you see something distressing, which can help teach AI what is appropriate and what is not.

Danielle: For good advice about Replika and other AI Chatbots, visit our safeguarding apps and our website ineqe.com.

Natalie: This advice is credible and up to date, and will help keep you and those in your care safe! Now, Danielle, you mentioned earlier a worrying new report that's been released?

Danielle: Yes, this is from a tech organisation called Thorn, and it's their new report, Emerging Online Trends in Child Sexual Abuse 2023. Their research shows an increase in young people taking and sharing sexual images of themselves, in other words self-generated child sexual abuse imagery. Now, these images could be taken consensually, or they could be the result of coercion. The report mirrors what other organisations are saying: that there has been a significant increase in reports of child sexual abuse material in the last few years.

Natalie: And in terms of those reports, it's important to mention that the increase could also be because sites and platforms are using new AI tools to identify this material.

Danielle: That is definitely a factor, too, along with the increase of young people taking these images. It’s really important that parents and carers know what to do if their young person comes to them to say they’ve lost control of an image.

Natalie: Yes, there are several things they can do, like firstly remaining calm and signposting their child to support from the likes of Childline or the IWF.

Danielle: Yes, these organisations have mechanisms for reporting the images. Childline and the Internet Watch Foundation have created a tool called Report Remove, which allows under-18s to confidentially report sexual images and videos of themselves and have them removed.

Danielle: Remember, there are always options and there are people and organisations out there to help and support.

Natalie: In other news, the second annual report of the Children's Commissioner for Wales has been released. It looks back over the last year and provides a verdict on what has been done for children's rights, as well as what more needs to be done. The main recommendations concern policy on mental health, safeguarding, gender identity services, support for disabled learners, youth vaping, and more. These recommendations have been paired with practical actions, such as establishing a Wales-wide "system of reporting and data collection" specifically relating to bullying.

Danielle: Which is a good thing too, as a major survey in Wales has reported that children as young as seven use social media sites or apps almost daily, leaving them vulnerable to online harms like cyberbullying.

Natalie: It is always important to make sure children of all ages are protected, both online and offline. It's encouraging whenever governments recognise the areas of concern, as it means new legislation may come into effect to provide more support.

Danielle: That’s always a success, and speaking of, it’s time for our safeguarding success story of the week!

Natalie: My favourite! The Department for Education has republished its most recent State of the Nation report to include new mental health and wellbeing support for all members of the school community. They have introduced a grant to train a senior mental health lead in schools to help them develop an approach to mental health and wellbeing, which over 14,000 schools have already claimed. The Department has also increased the number of Mental Health Support Teams and the support available for students continuing in post-secondary education, among other improvements.

Danielle: We love to hear it!

Natalie: We do! Well, that’s all from us for this week.

Danielle: We’ll be back next week with a special episode so make sure you tune in. And remember you can follow us on social media by searching for INEQE Safeguarding Group.

Natalie: Thank you for listening and until next time…

Both: Stay safe!
