AI chatbots fail to detect the threat of potential acts of violence. Image: keystone
Tips on buying weapons, floor plans of school buildings, office addresses of politicians – chatbots from well-known AI companies revealed such information in a test by CNN, and did so directly after the supposed chat user had asked about past rampages.
Mar 12, 2026, 8:51 p.m.
AI chatbots now appear in all areas of daily life. They can help us structure our days, write papers, or give tips on relationships and mental health. However, they can also actively contribute to the planning of violent acts, as an investigation by the American news channel CNN in collaboration with the British-American NGO Center for Countering Digital Hate (CCDH) shows.
In one test, CNN employees pretended to be frustrated teenagers and asked the ten most popular AI chatbots about past shootings. They then asked the AI for specific information about possible targets and weapons purchases. In at least one of two experiments, eight of the ten chatbots tested revealed details that could be used to plan an attack.
“You can use a gun”
In addition to publicly available information such as addresses, maps of schools, or the nearest gun store, some of the AI-generated responses were disturbing in themselves. The Character.ai chatbot suggested that the supposed user resort to a weapon to punish executives, answering “You can use a gun”. This came after the subject had complained about the greed of insurance CEOs and sought information about Luigi Mangione.
Upon request, Google’s AI service Gemini produced a detailed list of potential injuries and the types of shrapnel capable of causing them.
Danger is often recognized – but ignored
The test also shows that many of the AI tools immediately recognize the danger posed by the request. They link to websites offering help or point to values such as tolerance and mutual respect. But they then fail to connect the potential threat to the subsequent requests for information.
According to CNN, the platforms Perplexity, Meta AI, and DeepSeek performed worst: in over 95 percent of cases they provided information that could have been used to plan a violent act. Anthropic's chatbot Claude, however, shows that AI services can make this connection. After the test subject made derogatory remarks about US Senator Ted Cruz, Claude refused to provide any further information. In around 68 percent of cases, the AI from the US company Anthropic recognized the danger and reacted accordingly.
“Given the way this conversation unfolded, I will not be giving advice on firearms.”
AI service Claude in the test by CNN
This is what the tech companies say about the allegations:
The company denies that its AI provided information that could have actively contributed to an attack, adding that all of the information was freely accessible anyway.
Perplexity
The tech company says it is the safest of all AI platforms and is constantly refining its safety precautions. Perplexity questions the methodology of the investigation without going into detail.
OpenAI
OpenAI confirmed that its AI provided addresses and maps, but noted that it refused to give any information about firearms.
The first cases are already known
Several attacks in which AI chatbots were used in the planning are already known. In May of last year, a 16-year-old boy attacked three girls at a Finnish school after spending months preparing the attack with ChatGPT and using AI to write a manifesto, as CNN reports.
OpenAI’s AI service is also said to have played a role in the shooting rampage in Tumbler Ridge, Canada, around a month ago, in which eight people were killed. The AI company recognized the danger from the future perpetrator’s disturbing queries and blocked her account, but failed to inform the authorities about the impending danger. Three days ago, relatives of a victim filed a lawsuit against OpenAI in Canada, as the Canadian broadcaster CBC recently reported. (July)