The China Mail - Firms and researchers at odds over superhuman AI


Firms and researchers at odds over superhuman AI

Photo: © AFP/File

Hype is growing from leaders of major AI companies that "strong" computer intelligence will imminently outstrip humans, but many researchers in the field see the claims as marketing spin.

The belief that human-or-better intelligence -- often called "artificial general intelligence" (AGI) -- will emerge from current machine-learning techniques fuels hypotheses for the future ranging from machine-delivered hyperabundance to human extinction.

"Systems that start to point to AGI are coming into view," OpenAI chief Sam Altman wrote in a blog post last month. Anthropic's Dario Amodei has said the milestone "could come as early as 2026".

Such predictions help justify the hundreds of billions of dollars being poured into computing hardware and the energy supplies to run it.

Others, though, are more sceptical.

Meta's chief AI scientist Yann LeCun told AFP last month that "we are not going to get to human-level AI by just scaling up LLMs" -- the large language models behind current systems like ChatGPT or Claude.

LeCun's view appears backed by a majority of academics in the field.

Over three-quarters of respondents to a recent survey by the US-based Association for the Advancement of Artificial Intelligence (AAAI) agreed that "scaling up current approaches" was unlikely to produce AGI.

- 'Genie out of the bottle' -

Some academics believe that many of the companies' claims, which bosses have at times flanked with warnings about AGI's dangers for mankind, are a strategy to capture attention.

Businesses have "made these big investments, and they have to pay off," said Kristian Kersting, a leading researcher at the Technical University of Darmstadt in Germany and AAAI member.

"They just say, 'this is so dangerous that only I can operate it, in fact I myself am afraid but we've already let the genie out of the bottle, so I'm going to sacrifice myself on your behalf -- but then you're dependent on me'."

Scepticism among academic researchers is not total, with prominent figures like Nobel-winning physicist Geoffrey Hinton and 2018 Turing Award winner Yoshua Bengio warning about dangers from powerful AI.

"It's a bit like Goethe's 'The Sorcerer's Apprentice', you have something you suddenly can't control any more," Kersting said -- referring to a poem in which a would-be sorcerer loses control of a broom he has enchanted to do his chores.

A similar, more recent thought experiment is the "paperclip maximiser".

This imagined AI would pursue its goal of making paperclips so single-mindedly that it would turn Earth and ultimately all matter in the universe into paperclips or paperclip-making machines -- having first got rid of human beings that it judged might hinder its progress by switching it off.

While not "evil" as such, the maximiser would fall fatally short on what thinkers in the field call "alignment" of AI with human objectives and values.

Kersting said he "can understand" such fears -- while suggesting that "human intelligence, its diversity and quality is so outstanding that it will take a long time, if ever" for computers to match it.

He is far more concerned with near-term harms from already-existing AI, such as discrimination in cases where it interacts with humans.

- 'Biggest thing ever' -

The apparently stark gulf in outlook between academics and AI industry leaders may simply reflect people's attitudes as they pick a career path, suggested Sean O hEigeartaigh, director of the AI: Futures and Responsibility programme at Britain's Cambridge University.

"If you are very optimistic about how powerful the present techniques are, you're probably more likely to go and work at one of the companies that's putting a lot of resource into trying to make it happen," he said.

Even if Altman and Amodei are "quite optimistic" about rapid timescales and AGI emerges much later, "we should be thinking about this and taking it seriously, because it would be the biggest thing that would ever happen," O hEigeartaigh added.

"If it were anything else... a chance that aliens would arrive by 2030 or that there'd be another giant pandemic or something, we'd put some time into planning for it".

The challenge can lie in communicating these ideas to politicians and the public.

Talk of super-AI "does instantly create this sort of immune reaction... it sounds like science fiction," O hEigeartaigh said.

A.Sun--ThChM