The Technology

The technology used by Big Sister is developed and tested in-house and is patent pending.

[Diagram of the five detection methods: AI, NLP, ML, patterns and key words, apps and sites]

Detection Methods

Big Sister uses five complex detection methods to identify potential dangers. Our Danger Detection algorithm encodes child safety advice and is guided by our panel of experts. Explore the technology below.

  • NLP (natural language processing) analyses text to interpret tone, emotion, and context, including:

    • Detecting sarcasm

    • Identifying bullying language, even when unrecognised by victims

    • Flagging violent rhetoric as a potential radicalisation indicator

    • Recognising "love bombing" as a possible grooming tactic

    This technology enables a deeper understanding of communication, aiding early detection of concerning behaviours, as in the sketch below.
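    A minimal sketch of pattern-based text flagging, not Big Sister's actual pipeline: the phrase lists and categories below are illustrative assumptions, and a production system would rely on trained language models rather than fixed patterns.

    ```python
    import re

    # Illustrative phrase lists -- stand-ins for trained models, not real detection rules.
    BULLYING_PATTERNS = [r"\bnobody likes you\b", r"\bloser\b"]
    LOVE_BOMBING_PATTERNS = [r"\byou'?re so mature\b", r"\bour little secret\b"]

    def flag_message(text: str) -> list[str]:
        """Return the concern categories matched by a single message."""
        lowered = text.lower()
        flags = []
        if any(re.search(p, lowered) for p in BULLYING_PATTERNS):
            flags.append("possible bullying language")
        if any(re.search(p, lowered) for p in LOVE_BOMBING_PATTERNS):
            flags.append("possible love bombing (grooming tactic)")
        return flags

    print(flag_message("You're so mature for your age... our little secret, ok?"))
    # -> ['possible love bombing (grooming tactic)']
    ```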

  • AI in Big Sister focuses on child protection through:

    • Standalone image analysis that detects:

      • Problematic objects (guns, drugs, radicalisation symbols)

      • Nudity

      • Text embedded in images, which is extracted for analysis

    • Automated processing that avoids human exposure to sensitive content

    • High-speed analysis of large image volumes

    • Human oversight:

      • Monthly audits

      • Algorithm training and testing

    This approach balances privacy concerns with child safety, keeping sensitive data protected while potential threats are identified effectively.

    Privacy safeguards:

    • No external data sharing

    • Limited human interaction with sensitive content

    • Regular audits to maintain system integrity

    By using AI for initial screening, Big Sister minimises the privacy risks associated with human review of sensitive material while maximising child protection. A sketch of this screening step follows.
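    The sketch below shows how an automated screening step like this might be wired up. It is not Big Sister's implementation: `classify` and `ocr` are stand-ins for an object/nudity classifier and an OCR engine.

    ```python
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class ScreeningResult:
        path: str
        labels: list[str]     # e.g. ["weapon", "nudity"]
        extracted_text: str   # text found inside the image
        needs_review: bool

    def screen_image(path: str,
                     classify: Callable[[str], list[str]],
                     ocr: Callable[[str], str]) -> ScreeningResult:
        """Screen one image without a human ever viewing it."""
        labels = classify(path)
        text = ocr(path)
        # Escalate only when the automated pass finds something concerning.
        needs_review = bool(labels) or bool(text.strip())
        return ScreeningResult(path, labels, text, needs_review)

    result = screen_image("photo.jpg",
                          classify=lambda p: ["weapon"],  # pretend the model fired
                          ocr=lambda p: "")
    print(result.needs_review)  # True -> sampled into the audit queue
    ```

    Because the automated pass runs first, a human only ever handles items the system has already escalated, which is what keeps exposure to sensitive content limited.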

  • Some apps are never appropriate for children (18+, gambling, pornography, dark web browsers), and the same is true of some sites. There are also typical words and phrases used when someone is being radicalised. Emojis, in conjunction with NLP, give a fuller picture of the meaning of the messages and content your child is exposed to, as in the sketch below.
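    A minimal sketch of this kind of matching; the app IDs, phrases, and emoji below are invented examples, not Big Sister's curated lists.

    ```python
    # Invented example entries -- real blocklists are far larger and expert-curated.
    BLOCKED_APPS = {
        "example-casino-app": "gambling",
        "example-darkweb-browser": "dark web browser",
    }
    RADICALISATION_PHRASES = {"join the cause", "they are the enemy"}
    HIGH_RISK_EMOJI = {"🔫", "💊"}

    def check_app(app_id: str) -> str | None:
        """Return the blocked category for an app, or None if it is allowed."""
        return BLOCKED_APPS.get(app_id)

    def scan_message(message: str) -> list[str]:
        """Combine phrase and emoji signals for a fuller picture of one message."""
        lowered = message.lower()
        hits = [p for p in RADICALISATION_PHRASES if p in lowered]
        hits += [e for e in HIGH_RISK_EMOJI if e in message]
        return hits

    print(check_app("example-casino-app"))        # 'gambling'
    print(scan_message("They are the enemy 🔫"))  # ['they are the enemy', '🔫']
    ```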

  • Grooming detection is crucial as it often precedes four major danger categories:

    1. Extreme content

    2. Exploitation

    3. Child sexual abuse

    4. Illegal activities

    Key points:

    • Encoded grooming patterns enable early detection

    • AI identifies cumulative patterns indicating risks such as:

      • Suicide

      • Eating disorders

    This approach allows proactive intervention in potentially dangerous situations involving children. The sketch below shows the cumulative idea.
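    No single signal triggers an alert; their weighted sum over a window does. The signal names, weights, window, and threshold below are all illustrative assumptions, not Big Sister's encoded patterns.

    ```python
    from collections import deque
    from datetime import datetime, timedelta

    class CumulativeRiskTracker:
        """Sum weak signals over a sliding window; alert when a threshold is crossed."""

        # Illustrative weights -- real weighting would come from the expert panel.
        SIGNAL_WEIGHTS = {"love_bombing": 2.0, "secrecy_request": 3.0,
                          "isolation_language": 2.5, "gift_offer": 1.5}

        def __init__(self, window_days: int = 30, threshold: float = 6.0):
            self.window = timedelta(days=window_days)
            self.threshold = threshold
            self.events: deque[tuple[datetime, float]] = deque()

        def record(self, signal: str, when: datetime) -> bool:
            """Record one detected signal; True means the pattern warrants an alert."""
            self.events.append((when, self.SIGNAL_WEIGHTS.get(signal, 1.0)))
            while self.events and when - self.events[0][0] > self.window:
                self.events.popleft()
            return sum(w for _, w in self.events) >= self.threshold

    tracker = CumulativeRiskTracker()
    tracker.record("gift_offer", datetime(2024, 5, 1))              # 1.5 -> no alert
    tracker.record("love_bombing", datetime(2024, 5, 3))            # 3.5 -> no alert
    print(tracker.record("secrecy_request", datetime(2024, 5, 5)))  # 6.5 -> True
    ```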

  • A sudden change in behaviour, or a sudden new contact, is a very early indicator of a safeguarding issue. Big Sister observes a child's "normal" behaviour for a period and can then trigger on sudden, abnormal changes, as in the sketch below.
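    A toy version of the baseline-then-deviation idea; the fourteen-day learning period, the metric (daily message counts), and the cut-off are assumptions for illustration.

    ```python
    import statistics

    def is_abnormal(history: list[float], today: float, z_cutoff: float = 3.0) -> bool:
        """Flag today's value when it deviates sharply from the learned baseline.

        `history` is the child's own "normal", gathered during the
        observation period before any alerting starts."""
        if len(history) < 14:  # still learning the baseline -- never alert yet
            return False
        mean = statistics.mean(history)
        spread = statistics.stdev(history) or 1.0  # guard against a flat baseline
        return abs(today - mean) / spread > z_cutoff

    # Two weeks of typical daily messages, then a sudden spike of messages
    # to a brand-new contact:
    baseline = [2, 3, 2, 4, 3, 2, 3, 2, 3, 4, 2, 3, 3, 2]
    print(is_abnormal(baseline, 40))  # True -> flagged as a sudden change
    ```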

Explore our Responsible AI policy