The problem with Instagram’s new age recognition software

If Instagram’s last year has been defined by anything, it’s the mounting evidence that the platform has a negative impact on children. Despite the presumed link between the popularity of Instagram face filters and the rise in cosmetic surgery, and the widespread assumption that social media fuels insecurity in young people, Instagram had long been able to claim it was unaware of any adverse effects on children’s mental health. But since the so-called Facebook Papers were leaked in autumn 2021 – including an internal study suggesting that Instagram, which is owned by Facebook (now Meta), was harming teenage girls’ body image – the company has been in damage-control mode.

The changes over the past nine months have included several new controls for parents and the cancellation of an ill-timed plan to create a children’s version of the app. And now the company has added what appears to be its most robust technological intervention yet to improve safeguarding for teens: facial age estimation to verify how old users are.

One concern about Instagram’s effects on young people has been the ease with which any child with access to a smartphone could create an account. Although the app requires users to be at least 13 to join, all a younger child had to do to get around the limit was lie about their age. Instagram had developed an AI that combed the app for children aged 12 or under, but it did not appear to be especially reliable.

This time, rather than developing its own AI, Instagram has worked with Yoti, a London-based company that “specialises in privacy-preserving ways to verify age”. The new AI, Instagram says, will scan a video selfie provided by the user (offered as an alternative for those who can’t provide an ID), estimate their age and pass that information to Instagram; both companies will then delete the data.
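In outline, the flow the two companies describe looks something like the sketch below. This is a loose illustration, not either company’s actual system: every name in it is hypothetical, neither Yoti nor Instagram publishes an API like this, and the age model is stubbed out.

```python
from dataclasses import dataclass

@dataclass
class Selfie:
    """A user-submitted video selfie (hypothetical type)."""
    video: bytes

def estimate_age(selfie: Selfie) -> float:
    """Stand-in for Yoti's facial age-estimation model (stubbed out here)."""
    return 14.2  # placeholder: the real model returns an estimated age in years

def verify_user_age(selfie: Selfie, minimum_age: int = 13) -> bool:
    """Estimate the user's age, discard the video, then apply the cutoff."""
    estimated_age = estimate_age(selfie)  # Yoti returns only an age estimate
    selfie.video = b""                    # both companies say the video is then deleted
    return estimated_age >= minimum_age   # Instagram enforces the minimum age

print(verify_user_age(Selfie(video=b"...")))  # True with the stubbed estimate
```

The notable design choice, as described, is that only the age estimate crosses the boundary between the two companies; the video itself is meant to be discarded on both sides.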

“Understanding someone’s age online is a complex, industry-wide challenge,” Instagram said in its announcement. “Many people, such as teens, don’t always have access to the forms of ID that make age verification clear and simple. As an industry, we have to explore novel ways to approach the dilemma of verifying someone’s age when they don’t have an ID.”


While this appears to be a genuine technological innovation, the new system seems likely to prove ultimately useless while straying into murky ethical territory – making it little more than a PR exercise, with few benefits for children or parents.

In terms of reliability, the white paper on Yoti’s website suggests the margin of error may render the technology ineffective. The company reports a mean absolute error – the average gap between estimated and true age – of 1.52 years for 13- to 19-year-olds and 1.56 years for 6- to 12-year-olds. For many age-verification purposes this may sound fairly accurate, but it means the AI may regularly fail to catch 11-year-olds pretending to be 13. Though it may be useful for spotting very young children on the app, it is ill-equipped for the problem it purports to solve.
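To put that margin of error in context, here is a rough simulation – a sketch under stated assumptions, not Yoti’s actual model. It assumes estimation errors are zero-mean and roughly normally distributed, in which case a mean absolute error of about 1.5 years implies a standard deviation of roughly 1.9 years.

```python
import math
import random

# Illustrative sketch only: assumes age estimates have zero-mean, roughly
# normal errors. For a normal distribution the mean absolute error (MAE)
# relates to the standard deviation by MAE = sigma * sqrt(2 / pi).
MAE = 1.5                             # close to Yoti's reported figures
sigma = MAE / math.sqrt(2 / math.pi)  # ~1.88 years

def passes_age_gate(true_age: float, cutoff: float = 13.0) -> bool:
    """Simulate one age estimate and check it against the cutoff."""
    estimate = random.gauss(true_age, sigma)
    return estimate >= cutoff

# How often would an 11-year-old be waved through a 13+ check?
trials = 100_000
passed = sum(passes_age_gate(11.0) for _ in range(trials))
print(f"11-year-olds passing a 13+ check: {passed / trials:.1%}")
```

Under those assumptions, roughly one in seven attempts by an 11-year-old clears the bar on a single try – and since nothing stops a child from simply retrying, a handful of attempts would make slipping through more likely than not.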

There are ethical issues, too. AI systems have repeatedly been shown to exhibit racial bias, a product of the data they are trained on and the assumptions built into them. Yoti has said that in this case users are not individually identifiable, and that gender and skin-tone bias is “minimised”, but it seems unlikely the company has built an AI significantly less biased than the industry standard – an industry that is still failing to resolve this problem. Privacy is also a concern. Yoti and Instagram have said they will delete the identification videos as soon as analysis is complete, and Yoti has said it will comply with the EU’s General Data Protection Regulation, which should improve the safety of children’s data. Yet this is not a foolproof system for keeping children’s images private: promises of self-regulation will not reassure everyone, and offer no guarantee against data breaches.

Children’s safety on social media is a real issue – one that stretches far beyond Instagram – and it is rightly gaining the mainstream attention it has long deserved. But we also have to ask: is keeping under-13s from creating Instagram accounts really the most meaningful step we can take to protect children’s safety and mental health? Or is the real issue what happens once they are old enough to have an account? By offering only slick, highly technical solutions that sound robust but will inevitably prove inadequate to the task at hand, social media platforms show how uninterested they are in solving these problems – problems they hope will eventually, quietly, go away.

