
Hey chatbot, is this true? AI 'factchecks' sow misinformation
Photo: © AFP

As misinformation exploded during India's four-day conflict with Pakistan, social media users turned to an AI chatbot for verification -- only to encounter more falsehoods, underscoring its unreliability as a fact-checking tool.

With tech platforms reducing human fact-checkers, users are increasingly relying on AI-powered chatbots -- including xAI's Grok, OpenAI's ChatGPT, and Google's Gemini -- in search of reliable information.

"Hey @Grok, is this true?" has become a common query on Elon Musk's platform X, where the AI assistant is built in, reflecting the growing trend of seeking instant debunks on social media.

But the responses are often themselves riddled with misinformation.

Grok -- now under renewed scrutiny for inserting "white genocide," a far-right conspiracy theory, into unrelated queries -- wrongly identified old video footage from Sudan's Khartoum airport as a missile strike on Pakistan's Nur Khan airbase during the country's recent conflict with India.

Unrelated footage of a building on fire in Nepal was misidentified as "likely" showing Pakistan's military response to Indian strikes.

"The growing reliance on Grok as a fact-checker comes as X and other major tech companies have scaled back investments in human fact-checkers," McKenzie Sadeghi, a researcher with the disinformation watchdog NewsGuard, told AFP.

"Our research has repeatedly found that AI chatbots are not reliable sources for news and information, particularly when it comes to breaking news," she warned.

- 'Fabricated' -

NewsGuard's research found that 10 leading chatbots were prone to repeating falsehoods, including Russian disinformation narratives and false or misleading claims related to the recent Australian election.

In a recent study of eight AI search tools, the Tow Center for Digital Journalism at Columbia University found that chatbots were "generally bad at declining to answer questions they couldn't answer accurately, offering incorrect or speculative answers instead."

When AFP fact-checkers in Uruguay asked Gemini about an AI-generated image of a woman, it not only confirmed its authenticity but fabricated details about her identity and where the image was likely taken.

Grok recently labeled a purported video of a giant anaconda swimming in the Amazon River as "genuine," even citing credible-sounding scientific expeditions to support its false claim.

In reality, the video was AI-generated, AFP fact-checkers in Latin America reported, noting that many users cited Grok's assessment as evidence the clip was real.

Such findings have raised concerns as surveys show that online users are increasingly shifting from traditional search engines to AI chatbots for information gathering and verification.

The shift also comes as Meta announced earlier this year it was ending its third-party fact-checking program in the United States, turning over the task of debunking falsehoods to ordinary users under a model known as "Community Notes," popularized by X.

Researchers have repeatedly questioned the effectiveness of "Community Notes" in combating falsehoods.

- 'Biased answers' -

Human fact-checking has long been a flashpoint in a hyperpolarized political climate, particularly in the United States, where conservative advocates maintain it suppresses free speech and censors right-wing content -- something professional fact-checkers vehemently reject.

AFP currently works in 26 languages with Facebook's fact-checking program, including in Asia, Latin America, and the European Union.

The quality and accuracy of AI chatbots can vary, depending on how they are trained and programmed, prompting concerns that their output may be subject to political influence or control.

Musk's xAI recently blamed an "unauthorized modification" for causing Grok to generate unsolicited posts referencing "white genocide" in South Africa.

When AI expert David Caswell asked Grok who might have modified its system prompt, the chatbot named Musk as the "most likely" culprit.

Musk, the South African-born billionaire backer of President Donald Trump, has previously peddled the unfounded claim that South Africa's leaders were "openly pushing for genocide" of white people.

"We have seen the way AI assistants can either fabricate results or give biased answers after human coders specifically change their instructions," Angie Holan, director of the International Fact-Checking Network, told AFP.

"I am especially concerned about the way Grok has mishandled requests concerning very sensitive matters after receiving instructions to provide pre-authorized answers."

burs-ac/nl
