
Can AI toys harm your kids?

A study by Consumer Reports found that 78% of the smart toys it tested had security vulnerabilities that could expose children’s personal information.

“What seems harmless fun might be camouflaging a nefarious reality,” says tech security firm Zenshield. That may sound out of place in a discussion of toys, but the firm is not talking about traditional playthings. Its focus is the artificial intelligence (AI) toys that have become hot favorites among parents and children worldwide.

Yet more than three-quarters of these bright, engaging play gadgets are reported to have hidden software vulnerabilities. These flaws open the door to potential cyberattacks, adding a worrying dimension to what was once considered harmless fun.

“These vulnerabilities can allow unauthorised users to gain control of the toys,” says Steffan Black of Zenshield. “They can access sensitive data, including personal information, and even, disturbingly, use the toy’s audio and video features for surveillance.”

Know the Potential Threats

Before you decide to discard all your child’s AI toys, it’s essential to comprehend the potential threats these vulnerabilities pose. Among the top concerns are:

  1. Data Breach:

According to a report by Cybersecurity Ventures, data breaches have risen steadily, with the number of exposed records expected to reach 150 billion by 2023.

The Consumer Reports study cited above found that 78% of the smart toys tested had security vulnerabilities that could expose children’s personal information.

  2. Unauthorised Access:

The Federal Trade Commission (FTC) reported a 267% increase in Internet of Things (IoT) related security vulnerabilities from 2017 to 2018.

A study by Rapid7 revealed that 90% of tested IoT devices, including some AI toys, were susceptible to at least one security vulnerability that could allow unauthorised access.

  3. Surveillance:

The Internet of Things Security Foundation reported that incidents of IoT devices being used for surveillance purposes increased by 72% in the past year.

In a survey conducted by NortonLifeLock, 45% of parents expressed concerns about the potential misuse of smart toys’ cameras and microphones for unauthorized surveillance.

Safeguarding Your Children’s Privacy

Recognising the threats is the first step. The next involves strategising on how best to safeguard your child’s privacy while using these AI toys.

“Mitigating the risks associated with AI toys is crucial,” says Black. “It isn’t about discarding them altogether; it’s about using them smartly and securely.”

Zenshield offers a number of actionable tips to that end.
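For more technically inclined parents, the sketch below shows one illustrative way to check whether a connected toy exposes common, often-insecure network services on the home Wi-Fi. It is not one of Zenshield’s tips, and the toy’s IP address and the list of ports are placeholder assumptions you would replace with your own values (most routers list connected devices and their addresses).

import socket

# Illustrative sketch only -- not Zenshield's advice. Replace TOY_IP with the
# toy's address as shown in your router's list of connected devices.
TOY_IP = "192.168.1.50"            # hypothetical address of the toy on your Wi-Fi
COMMON_PORTS = {
    23: "Telnet (unencrypted remote control)",
    80: "HTTP (unencrypted web interface)",
    554: "RTSP (video streaming)",
}

for port, description in COMMON_PORTS.items():
    # connect_ex returns 0 if the TCP connection succeeds, i.e. the port is open
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(2)         # avoid hanging on filtered ports
        is_open = sock.connect_ex((TOY_IP, port)) == 0
    status = "OPEN" if is_open else "closed"
    print(f"Port {port} ({description}): {status}")

An open port is not proof of a problem, but an unexpected open service on a child’s toy is a reasonable prompt to check the manufacturer’s documentation or switch the device off until it has been updated.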

Turning the Tables

To protect your children in this technologically advanced reality, awareness is your greatest tool. Understanding that threats exist even in seemingly innocuous toys is the starting point. Following through with protective measures maintains the safety net.

While the data might seem alarming at first glance, it shouldn’t be used to demonise technology or AI toys altogether. 

As Black sums it up, “AI toys can provide incredible interactive educational opportunities for children. We don’t need to fear them. We must respect them by recognising their potential and being smart in how we use them.”

Hence, it’s not about taking away the toys — it’s about redefining how we use them. Equip yourself with the correct knowledge and the right security measures. Change the game from villains and vulnerabilities to safety and smart usage. 
