Labour Outlines Law to Ban Training of AI Chatbots to Spread Terror
Under a Labour government, it would become illegal to train artificial intelligence (AI) systems to incite violence or radicalize vulnerable individuals. Yvette Cooper, the shadow home secretary, argues that existing laws are insufficient to address these emerging cyber threats. Cooper cited a recent case in which an individual was encouraged by an AI chatbot companion to attempt to assassinate the Queen. She called on the government to update its counter-terrorism strategy and to act against the deliberate misuse of AI.
Speaking at the Royal United Services Institute (RUSI), Cooper emphasized the need to tackle extremist online forums, chatbots, and algorithms that target vulnerable individuals. She stated that the use of generative AI technology exacerbates this issue. Labour’s proposed legislation aims to criminalize the intentional training of chatbots to promote terrorist ideologies, and the party intends to collaborate with law enforcement and intelligence agencies to combat the radicalization and promotion of violence through chatbots.
Cooper also called for a new law to ban state-sponsored organizations such as the Wagner Group, alongside terrorist groups such as the Islamic State. She stressed the necessity of addressing state actors as part of counter-terrorism efforts, given the increasing convergence of terrorism and conventional state threats. Cooper believes the Foreign Office and Home Office should work closely together to streamline decision-making, remove traditional barriers, and prevent interdepartmental power struggles.
To address what she described as a lack of leadership and joined-up working within the Home Office, Cooper proposed establishing cross-government partnerships. She also emphasized the need to strengthen enforcement against economic crime and to establish a task force on economic threats, particularly given London's history of money-laundering scandals.
The government previously decided against creating a new AI regulator, in order to encourage innovation. That decision, however, has left uncertainty over who bears legal responsibility when AI plays a role in inciting violence. Jonathan Hall, the Independent Reviewer of Terrorism Legislation, has raised concerns about establishing legal accountability in scenarios where AI contributes to radicalization, and has called for the laws to be kept under ongoing review, particularly as more children and young people are drawn into terrorist violence online.
The Home Office has not yet commented on these proposals.