New Frontiers, New Harms: AI Chatbots and Violence Against Women and Girls

It is no exaggeration to say that AI chatbots have become part of daily life for many people, at least since OpenAI released ChatGPT in 2022. But, as is so often the case with exciting new technologies, they can also introduce new forms of gender-based violence. As has been seen with deepfake generation technology and ‘nudify’ apps, the law often lags far behind technological developments, and the harms are entrenched by the time law reform catches up.

A new report, Invisible No More: How AI Chatbots are Reshaping Violence Against Women and Girls, which will be launched this afternoon in the House of Lords, shines a light on these harms, the design choices and failures in safety mechanisms that contribute to them, and the gaps in the current law that urgently need to be addressed.

Co-authored by myself, Clare McGlynn (Durham University), Stuart Macdonald (Swansea University), Rüya Tuna Toparlak (Lucerne University), Fabienne Tarrant (independent consultant), and Samantha Treacy (Swansea University), the report warns that AI chatbots are:

  • driving new forms of violence against women and girls (VAWG), such as chatbot-driven sexual harassment, grooming, and coercive and controlling behaviours;
  • enabling existing forms of VAWG, such as stalking, harassment, and image-based abuse, which are turbocharged and amplified through chatbots providing detailed and personalised advice;
  • simulating VAWG by co-producing interactive roleplays on the themes of rape and sexual violence, incest, and child sexual abuse; and
  • normalising VAWG by endorsing misogyny, reinforcing harmful norms, and trivialising gender-based violence.

The legal situation is complex. An AI chatbot cannot form the requisite mens rea (mental element), nor is it desirable to anthropomorphise AI by treating it as bearing responsibility for its actions. Instead, we examine the role of the companies providing these chatbots and whether criminal or civil liability could attach to them. Companies like to frame these issues as a problem of ‘user misuse’, but our report shows that design choices, training dynamics, and failures in governance enable and encourage chatbot-VAWG. For this reason, we recommend a new criminal offence: dangerous deployment of an AI chatbot.

When it comes to civil and regulatory law, the Online Safety Act only applies to certain types of chatbot (those that are classified as ‘user-to-user’ or ‘search’ services, or services with pornographic content), resulting in a gap in protection. The Government has proposed an amendment to the Crime and Policing Bill that would give the Secretary of State the power to amend the Online Safety Act by regulation in the future, for the purposes of ‘minimising or mitigating the risks of harm to individuals in the United Kingdom presented by illegal AI-generated content or the use of AI services for the commission or facilitation of priority offences’. Notwithstanding the obvious issue that this proposal would circumvent important Parliamentary scrutiny, we do not believe this is the best approach, because it fails to address the full breadth of harms, and lacks certainty and clarity. Instead, we see a real need for general AI Safety legislation, which would impose a set of positive duties on AI developers and deployers.

One key theme that emerged from our engagement with stakeholders was access to justice: pursuing a private claim can be time-consuming and expensive, and many important questions (e.g. whether chatbot providers owe a duty of care to their users and/or the wider public) remain to be resolved through case law. For this reason, we propose the establishment of an individual right of action for those harmed by AI, and an online safety regulator with the power to handle individual complaints quickly and cost-effectively.

My co-authors and I are grateful to UKRI for supporting this research, to Baroness Rosie Boycott for hosting the launch event this evening, and to all the stakeholders who took the time to be interviewed for this research.

Yvonne McDermott Rees is Professor of Law at Swansea University.
