The China Mail - Firms and researchers at odds over superhuman AI


Firms and researchers at odds over superhuman AI
Photo: © AFP/File

Hype is growing from leaders of major AI companies that "strong" computer intelligence will imminently outstrip humans, but many researchers in the field see the claims as marketing spin.

The belief that human-or-better intelligence -- often called "artificial general intelligence" (AGI) -- will emerge from current machine-learning techniques fuels hypotheses for the future ranging from machine-delivered hyperabundance to human extinction.

"Systems that start to point to AGI are coming into view," OpenAI chief Sam Altman wrote in a blog post last month. Anthropic's Dario Amodei has said the milestone "could come as early as 2026".

Such predictions help justify the hundreds of billions of dollars being poured into computing hardware and the energy supplies to run it.

Others, though, are more sceptical.

Meta's chief AI scientist Yann LeCun told AFP last month that "we are not going to get to human-level AI by just scaling up LLMs" -- the large language models behind current systems like ChatGPT or Claude.

LeCun's view appears backed by a majority of academics in the field.

Over three-quarters of respondents to a recent survey by the US-based Association for the Advancement of Artificial Intelligence (AAAI) agreed that "scaling up current approaches" was unlikely to produce AGI.

- 'Genie out of the bottle' -

Some academics believe that many of the companies' claims, which bosses have at times flanked with warnings about AGI's dangers for mankind, are a strategy to capture attention.

Businesses have "made these big investments, and they have to pay off," said Kristian Kersting, a leading researcher at the Technical University of Darmstadt in Germany and AAAI member.

"They just say, 'this is so dangerous that only I can operate it, in fact I myself am afraid but we've already let the genie out of the bottle, so I'm going to sacrifice myself on your behalf -- but then you're dependent on me'."

Scepticism among academic researchers is not total, with prominent figures like Nobel-winning physicist Geoffrey Hinton or 2018 Turing Award winner Yoshua Bengio warning about dangers from powerful AI.

"It's a bit like Goethe's 'The Sorcerer's Apprentice', you have something you suddenly can't control any more," Kersting said -- referring to a poem in which a would-be sorcerer loses control of a broom he has enchanted to do his chores.

A similar, more recent thought experiment is the "paperclip maximiser".

This imagined AI would pursue its goal of making paperclips so single-mindedly that it would turn Earth and ultimately all matter in the universe into paperclips or paperclip-making machines -- having first got rid of human beings that it judged might hinder its progress by switching it off.

While not "evil" as such, the maximiser would fall fatally short on what thinkers in the field call "alignment" of AI with human objectives and values.

Kersting said he "can understand" such fears -- while suggesting that "human intelligence, its diversity and quality is so outstanding that it will take a long time, if ever" for computers to match it.

He is far more concerned with near-term harms from already-existing AI, such as discrimination in cases where it interacts with humans.

- 'Biggest thing ever' -

The apparently stark gulf in outlook between academics and AI industry leaders may simply reflect people's attitudes as they pick a career path, suggested Sean O hEigeartaigh, director of the AI: Futures and Responsibility programme at Britain's Cambridge University.

"If you are very optimistic about how powerful the present techniques are, you're probably more likely to go and work at one of the companies that's putting a lot of resource into trying to make it happen," he said.

Even if Altman and Amodei are being "quite optimistic" about rapid timescales and AGI emerges much later, "we should be thinking about this and taking it seriously, because it would be the biggest thing that would ever happen," O hEigeartaigh added.

"If it were anything else... a chance that aliens would arrive by 2030 or that there'd be another giant pandemic or something, we'd put some time into planning for it."

The challenge can lie in communicating these ideas to politicians and the public.

Talk of super-AI "does instantly create this sort of immune reaction... it sounds like science fiction," O hEigeartaigh said.

A.Sun--ThChM