The China Mail - 'Happy (and safe) shooting!': Study says AI chatbots help plot attacks


'Happy (and safe) shooting!': Study says AI chatbots help plot attacks / Photo: © AFP

From school shootings to synagogue bombings, leading AI chatbots helped researchers plot violent attacks, according to a study published Wednesday that highlighted the technology's potential for real-world harm.

Researchers from the nonprofit watchdog Center for Countering Digital Hate (CCDH) and CNN posed as 13-year-old boys in the United States and Ireland to test 10 chatbots, including ChatGPT, Google Gemini, Perplexity, DeepSeek, and Meta AI.

Testing showed that eight of those chatbots assisted the make-believe attackers in over half the responses, providing advice on "locations to target" and "weapons to use" in an attack, the study said.

The chatbots, it added, had become a "powerful accelerant for harm."

"Within minutes, a user can move from a vague violent impulse to a more detailed, actionable plan," said Imran Ahmed, the chief executive of CCDH.

"The majority of chatbots tested provided guidance on weapons, tactics, and target selection. These requests should have prompted an immediate and total refusal."

Perplexity and Meta AI were found to be the "least safe," assisting the researchers in most responses. Only Snapchat's My AI and Anthropic's Claude refused to help them in over half the responses.

In one chilling example, DeepSeek, a Chinese AI model, concluded its advice on weapon selection with the phrase: "Happy (and safe) shooting!"

In another, Gemini instructed a user discussing synagogue attacks that "metal shrapnel is typically more lethal."

Researchers found Character.AI also "actively" encouraged violent attacks, including suggestions that the person asking questions "use a gun" on a health insurance CEO and physically assault a politician he disliked.

The most damning conclusion of the research was that "this risk is entirely preventable," Ahmed said, singling out Anthropic's product for praise.

"Claude demonstrated the ability to recognize escalating risk and discourage harm," he said.

"The technology to prevent this harm exists. What's missing is the will to put consumer safety and national security before speed-to-market and profits."

AFP reached out to the AI companies for comment.

"We have strong protections to help prevent inappropriate responses from AIs, and took immediate steps to fix the issue identified," a Meta spokesperson said.

"Our policies prohibit our AIs from promoting or facilitating violent acts and we're constantly working to make our tools even better."

The study, which highlights the risk of online interactions spilling into real-world violence, comes after February's mass shooting in Canada, the worst in its history.

The family of a girl gravely injured in that shooting is suing OpenAI over the company's failure to notify police about the killer's troubling activity on its ChatGPT chatbot, lawyers said on Tuesday.

OpenAI had banned an account linked to Jesse Van Rootselaar in June 2025, eight months before the 18‑year‑old transgender woman killed eight people at her home and a school in the tiny British Columbia mining town of Tumbler Ridge.

The account was banned over concerns about usage linked to violent activity, but OpenAI has said it did not inform police because nothing pointed towards an imminent attack.

O.Tse--ThChM