RESPONSIBLE AI

Big Sister is committed to providing AI for the good of families and society rather than exploiting them. Please read this page for the core ethical principles underlying our technology and our commitments to responsible AI.

Our Responsible AI policy will provide the details of how we back up these principles and commitments.



Core ethical principles

Ethical AI for children’s safety is hard but worth it: Applying AI to the critical social issue of protecting children online is a responsible use of the technology. Big Sister does not underestimate how difficult the problem of child safety is, but it is still worth tackling.

The child’s voice, consent and privacy are central to Big Sister’s Consent-Based Model: The requirement for child consent is crucial. Explaining consent so that a child can give it in an informed way, and can easily withdraw it, is key. Actively seeking the child’s voice is a priority for development and continual improvement.

Big Sister is a Tool for Empowerment, Not Monitoring: Big Sister will provide parents with targeted insights and resources designed to facilitate constructive conversations and build understanding with their children, rather than simply enabling passive surveillance.

Child Safety in the Digital Age is a Shared Responsibility: Big Sister is neither a “silver bullet” for online child safety nor a replacement for parental involvement, but a crucial partner for parents, educators, and the wider community in navigating the complexities of online safety.

Ethical AI is Foundational, Especially with Children's Data: Children are among the most vulnerable members of our societies, particularly those with Special Educational Needs and Disabilities and those facing economic hardship. Big Sister works for the most vulnerable people and recognises that our policies and ethics are even more important because of our vulnerable user base.


Big Sister Commitments

Continuous Improvement in Safety, Efficacy, and Ethics: We will keep learning and improving our ethical practice and responsible use of AI. We will actively seek feedback and listen to policymakers and children. We will keep making changes, evolving constantly as technology and society evolve.

High Standards: Big Sister will strive to maintain the highest standards of responsible AI, reflecting our vulnerable user base. We will provide clearly articulated ethical frameworks, governance models, and mechanisms for oversight of our technology.

Big Sister commits to invest in Responsible AI: Some companies trade off investment in responsible AI practices against the demands of a sustainable business model and competitive pricing. For Big Sister, if the ethics aren’t there, the business model is not viable. Responsible AI measures such as comprehensive bias audits, regular ethical reviews, sophisticated data governance infrastructure, and expert consultations are absolutely necessary.

Commitment to Policy Engagement: Big Sister will continue to engage with UK regulatory and policy developments so that our expertise helps contribute to a safer online ecosystem for children everywhere, not just for our immediate community.


Get into the detail:


Why “Big Sister”?

On a page about Responsible AI, it is important to address the origins of the name “Big Sister”.

It is both a nod to George Orwell’s 1984 and a reimagining of how a Big Sister used to protect her siblings before the internet.


In the dystopian novel 1984, there was an all-seeing, sinister Big Brother. Big Sister is also all-seeing, but there the resemblance ends. She is kinder and definitely not sinister. She’s on your side.


Before the internet, you’d expect a Big Sister to know much more than her younger siblings and also to know what they were “up to”. If they were doing something potentially dangerous, she would let a parent know without compromising the child’s privacy. Our Big Sister does this too, but she is plugged into their devices and supercharged with expert advice and AI.