Image by Google Gemini, based on a prompt by Gadget.

Product of the Day

Meta launches AI support

New Facebook and Instagram tool aims to provide faster, more direct support for users.

Meta has launched new AI-powered systems aimed at improving user support and strengthening content enforcement. These include an assistant for Facebook and Instagram, as well as advanced AI systems for moderation.

“As technology advances, we’re applying AI in more ways so you can get reliable, action-oriented help when you need it, and we can catch more severe violations like scams faster and more accurately, with fewer over-enforcement mistakes,” said Meta in a statement.

The Meta AI support assistant, which was previewed in December last year, is being rolled out in countries and territories where Meta AI is available on Facebook and Instagram for iOS and Android. The tool will also be available within Help Centre on Facebook and Instagram on desktop.

The support assistant is designed to provide direct assistance rather than general guidance. The tool can address a range of queries, including notification settings and new features. It can also carry out certain actions on behalf of users within Facebook, with plans to extend this functionality to Instagram. According to Meta, this includes:

  • Reporting scams, impersonation accounts, or problematic content.
  • Making it easier to see why your content was taken down, view appeal options, and track what happens next.
  • Managing your privacy settings.
  • Resetting passwords.
  • Updating profile settings.

The assistant, says Meta, can respond to requests in under five seconds, significantly reducing wait times compared with traditional help centre searches or external sources. The tool forms part of a broader effort to enhance support across Facebook and Instagram. The feature is rolling out across all languages supported by both platforms for support-related queries.

Meta has begun rolling out the support assistant to users experiencing login issues on Facebook and Instagram, initially focusing on select cases in the United States and Canada, with plans to expand to additional countries and broader account access scenarios. The company continues to invest in AI-powered tools aimed at improving accessibility, reliability, and effectiveness of support, with ongoing updates to the Meta AI support assistant as usage increases and the technology develops.

Further information about the Meta AI support assistant is available here.

Content enforcement

Meta previously reported positive results from changes aimed at reducing enforcement errors and prioritising action against illegal and high-severity content, including terrorism, child exploitation, drugs, fraud, and scams. The company says that advanced AI systems for content enforcement are being tested to build on this progress, with the aim of improving detection accuracy, identifying more violations, and strengthening responses to scams.

According to Meta, early tests show the systems can:

  • Reduce the chance that scammers trick people into giving away their login details, finding and mitigating 5,000 scam attempts per day that no existing review team had previously caught.
  • Identify and prevent more accounts from impersonating celebrities and other high-profile people, which Meta says helped reduce user reports of the most impersonated celebrities by over 80%.
  • Catch twice as much violating adult sexual solicitation content as human review teams, while also decreasing the rate of mistakes by more than 60%.
  • Prevent an account takeover by noticing that it was suddenly accessed from a new location, the password was changed, and edits were made to the profile. In isolation, these changes look harmless to a human reviewer, but the AI recognised them together as a threat.
  • Detect a fake site that spoofed a legitimate web address and posed as a popular sporting goods store, by noticing the real logo used alongside unusually low prices and a suspicious web address. In broader testing, this AI drove down views of ads with scams and other serious violations by 7%, offering better protection for users and brands.

These advanced AI systems, says Meta, support languages spoken by 98% of people online, expanding coverage beyond the previous support of around 80 languages. The systems can scale capacity based on demand across different languages and are designed to account for cultural nuances, including niche subcultures, as well as evolving regional code words, emoji meanings, and slang.

Future outlook for AI systems

Over the next few years, Meta plans to deploy more advanced AI systems once performance consistently exceeds current content enforcement methods, shifting the overall approach. This transition is expected to reduce reliance on third-party vendors for content moderation while strengthening internal systems and workforce capabilities. Human reviewers will remain involved, but AI systems will take on tasks better suited to automation, such as repetitive reviews of graphic content and monitoring areas where tactics frequently change, including illicit drug sales and scams.

“Even as we use new technology to scale what’s possible, people will remain at the centre of our approach,” said Meta. “AI can help us move faster and operate at scale, but it doesn’t replace human judgment — it helps us apply it more consistently across billions of pieces of content on our platforms. Experts will design, train, oversee, and evaluate our AI systems, measuring performance and making the most complex, high impact decisions. For example, people will continue to play a key role in how we make the highest risk and most critical decisions, such as appeals of account disablement or reports to law enforcement.

“We’re rigorously testing each of these AI systems, building in safeguards and evaluating their performance to protect against bias and ensure consistency and accuracy. Our Community Standards aren’t changing as a part of this shift, and with new tools like the Meta AI support assistant, we’ll be improving our methods for reporting violating content and for appealing mistakes. Ultimately, this approach will also help ensure people do what people are best at and technology does what technology is best at — combining the scale and capabilities of advanced AI with the expertise and judgement of people, each strengthening the other.”
