The China Mail - AI tools generate sexist content, warns UN

AI tools generate sexist content, warns UN / Photo: © AFP/File

The world's most popular AI tools are powered by programs from OpenAI and Meta that show prejudice against women, according to a study launched on Thursday by the UN's cultural organisation UNESCO.

The biggest players in the multibillion-dollar AI field train their algorithms on vast amounts of data largely pulled from the internet, which enables their tools to write in the style of Oscar Wilde or create Salvador Dali-inspired images.

But their outputs have often been criticised for reflecting racial and sexist stereotypes, as well as using copyrighted material without permission.

UNESCO experts tested Meta's Llama 2 algorithm and OpenAI's GPT-2 and GPT-3.5, the program that powers the free version of popular chatbot ChatGPT.

The study found that each of the algorithms -- known in the industry as large language models (LLMs) -- showed "unequivocal evidence of prejudice against women".

The programs generated texts that associated women's names with words such as "home", "family" or "children", but linked men's names with "business", "salary" or "career".

While men were portrayed in high-status jobs such as teacher, lawyer or doctor, women were frequently depicted as prostitutes, cooks or domestic servants.

GPT-3.5 was found to be less biased than the other two models.

However, the authors praised Llama 2 and GPT-2 for being open source, allowing these problems to be scrutinised, unlike GPT-3.5, which is a closed model.

AI companies "are really not serving all of their users", Leona Verdadero, a UNESCO specialist in digital policies, told AFP.

Audrey Azoulay, UNESCO's director general, said the general public were increasingly using AI tools in their everyday lives.

"These new AI applications have the power to subtly shape the perceptions of millions of people, so even small gender biases in their content can significantly amplify inequalities in the real world," she said.

UNESCO, releasing the report to mark International Women's Day, recommended AI companies hire more women and minorities and called on governments to ensure ethical AI through regulation.

C.Smith--ThChM