The China Mail - AI chatbots give bad health advice, research finds


AI chatbots give bad health advice, research finds
Photo: © GETTY IMAGES NORTH AMERICA/AFP/File

Next time you're considering consulting Dr ChatGPT, perhaps think again.


Despite now being able to ace most medical licensing exams, artificial intelligence chatbots do not give humans better health advice than they can find using more traditional methods, according to a study published on Monday.

"Despite all the hype, AI just isn't ready to take on the role of the physician," study co-author Rebecca Payne from Oxford University said.

"Patients need to be aware that asking a large language model about their symptoms can be dangerous, giving wrong diagnoses and failing to recognise when urgent help is needed," she added in a statement.

The British-led team of researchers wanted to find out how successful humans are when they use chatbots to identify their health problems and whether they require seeing a doctor or going to hospital.

The team presented nearly 1,300 UK-based participants with 10 different scenarios, such as a headache after a night out drinking, a new mother feeling exhausted or what having gallstones feels like.

Then the researchers randomly assigned the participants one of three chatbots: OpenAI's GPT-4o, Meta's Llama 3 or Cohere's Command R+. There was also a control group that used internet search engines.

People using the AI chatbots were only able to identify their health problem around a third of the time, while only around 45 percent figured out the right course of action.

This was no better than the control group, according to the study, published in the Nature Medicine journal.

- Communication breakdown -

The researchers pointed out the disparity between these disappointing results and how AI chatbots score extremely highly on medical benchmarks and exams, blaming the gap on a communication breakdown.

Unlike the simulated patient interactions often used to test AI, the real humans often did not give the chatbots all the relevant information.

And sometimes the humans struggled to interpret the options offered by the chatbot, or misunderstood or simply ignored its advice.

One out of every six US adults asks AI chatbots for health information at least once a month, the researchers said, with that number expected to increase as more people adopt the new technology.

"This is a very important study as it highlights the real medical risks posed to the public by chatbots," David Shaw, a bioethicist at Maastricht University in the Netherlands who was not involved in the research, told AFP.

He advised people to only trust medical information from reliable sources, such as the UK's National Health Service.

K.Lam--ThChM