The China Mail - Grok spews misinformation about deadly Australia shooting


Grok spews misinformation about deadly Australia shooting

Photo: © AFP

Elon Musk's AI chatbot Grok churned out misinformation about Australia's Bondi Beach mass shooting, misidentifying a key figure who saved lives and falsely claiming that a victim staged his injuries, researchers said Tuesday.


The episode highlights how chatbots often deliver confident yet false responses during fast-developing news events, fueling information chaos as online platforms scale back human fact-checking and content moderation.

The attack, carried out during a Jewish festival on Sunday in the Sydney beachside suburb of Bondi, was one of Australia's worst mass shootings, leaving 15 people dead and dozens wounded.

Among the falsehoods Grok circulated was its repeated misidentification of Ahmed al Ahmed, who was widely hailed as a Bondi Beach hero after he risked his life to wrest a gun from one of the attackers.

In one post reviewed by AFP, Grok claimed the verified clip of the confrontation was "an old viral video of a man climbing a palm tree in a parking lot, possibly to trim it," suggesting it "may be staged."

Citing credible media sources such as CNN, Grok separately misidentified an image of Ahmed as that of an Israeli hostage held by the Palestinian militant group Hamas for more than 700 days.

When asked about another scene from the attack, Grok incorrectly claimed it was footage from Tropical Cyclone Alfred, which brought severe weather to the Australian coast earlier this year.

Only after another user pressed the chatbot to re-evaluate its answer did Grok backpedal and acknowledge the footage was from the Bondi Beach shooting.

When reached for comment by AFP, Grok developer xAI responded only with an auto-generated reply: "Legacy Media Lies."

- 'Crisis actor' -

The misinformation underscores what researchers say is the unreliability of AI chatbots as fact-checking tools.

Internet users are increasingly turning to chatbots to verify images in real time, but the tools often fail, raising questions about their visual debunking capabilities.

In the aftermath of the Sydney attack, online users circulated an authentic image of one of the survivors, falsely claiming he was a "crisis actor," disinformation watchdog NewsGuard reported.

"Crisis actor" is a derogatory label used by conspiracy theorists to allege that someone posing as a victim of a tragic event is deceiving the public by feigning injuries or death.

Online users questioned the authenticity of a photo of the survivor with blood on his face, sharing a response from Grok that falsely labeled the image as "staged" or "fake."

NewsGuard also reported that some users circulated an AI-generated image -- created with Google's Nano Banana Pro model -- depicting red paint being applied to the survivor's face to be passed off as blood, seemingly to bolster the false claim that he was a crisis actor.

Researchers say AI models can be useful to professional fact-checkers, helping to quickly geolocate images and spot visual clues to establish authenticity.

But they caution that such tools cannot replace the work of trained human fact-checkers.

In polarized societies, however, professional fact-checkers often face accusations of liberal bias from conservatives, a charge they reject.

AFP currently works in 26 languages with Meta's fact-checking program, including in Asia, Latin America, and the European Union.

I.Taylor--ThChM