The China Mail - Firms and researchers at odds over superhuman AI

Firms and researchers at odds over superhuman AI / Photo: © AFP/File

Hype is growing from leaders of major AI companies that "strong" computer intelligence will imminently outstrip humans, but many researchers in the field see the claims as marketing spin.

The belief that human-or-better intelligence -- often called "artificial general intelligence" (AGI) -- will emerge from current machine-learning techniques fuels hypotheses for the future ranging from machine-delivered hyperabundance to human extinction.

"Systems that start to point to AGI are coming into view," OpenAI chief Sam Altman wrote in a blog post last month. Anthropic's Dario Amodei has said the milestone "could come as early as 2026".

Such predictions help justify the hundreds of billions of dollars being poured into computing hardware and the energy supplies to run it.

Others, though, are more sceptical.

Meta's chief AI scientist Yann LeCun told AFP last month that "we are not going to get to human-level AI by just scaling up LLMs" -- the large language models behind current systems like ChatGPT or Claude.

LeCun's view appears backed by a majority of academics in the field.

Over three-quarters of respondents to a recent survey by the US-based Association for the Advancement of Artificial Intelligence (AAAI) agreed that "scaling up current approaches" was unlikely to produce AGI.

- 'Genie out of the bottle' -

Some academics believe that many of the companies' claims, which bosses have at times flanked with warnings about AGI's dangers for mankind, are a strategy to capture attention.

Businesses have "made these big investments, and they have to pay off," said Kristian Kersting, a leading researcher at the Technical University of Darmstadt in Germany and AAAI member.

"They just say, 'this is so dangerous that only I can operate it, in fact I myself am afraid but we've already let the genie out of the bottle, so I'm going to sacrifice myself on your behalf -- but then you're dependent on me'."

Scepticism among academic researchers is not total, with prominent figures like Nobel-winning physicist Geoffrey Hinton or 2018 Turing Prize winner Yoshua Bengio warning about dangers from powerful AI.

"It's a bit like Goethe's 'The Sorcerer's Apprentice', you have something you suddenly can't control any more," Kersting said -- referring to a poem in which a would-be sorcerer loses control of a broom he has enchanted to do his chores.

A similar, more recent thought experiment is the "paperclip maximiser".

This imagined AI would pursue its goal of making paperclips so single-mindedly that it would turn Earth, and ultimately all matter in the universe, into paperclips or paperclip-making machines -- having first got rid of the human beings it judged might hinder its progress by switching it off.

While not "evil" as such, the maximiser would fall fatally short on what thinkers in the field call "alignment" of AI with human objectives and values.

Kersting said he "can understand" such fears -- while suggesting that "human intelligence, its diversity and quality is so outstanding that it will take a long time, if ever" for computers to match it.

He is far more concerned with near-term harms from already-existing AI, such as discrimination in cases where it interacts with humans.

- 'Biggest thing ever' -

The apparently stark gulf in outlook between academics and AI industry leaders may simply reflect people's attitudes as they pick a career path, suggested Sean O hEigeartaigh, director of the AI: Futures and Responsibility programme at Britain's Cambridge University.

"If you are very optimistic about how powerful the present techniques are, you're probably more likely to go and work at one of the companies that's putting a lot of resource into trying to make it happen," he said.

Even if Altman and Amodei are being "quite optimistic" about rapid timescales and AGI in fact emerges much later, "we should be thinking about this and taking it seriously, because it would be the biggest thing that would ever happen," O hEigeartaigh added.

"If it were anything else... a chance that aliens would arrive by 2030 or that there'd be another giant pandemic or something, we'd put some time into planning for it".

The challenge can lie in communicating these ideas to politicians and the public.

Talk of super-AI "does instantly create this sort of immune reaction... it sounds like science fiction," O hEigeartaigh said.

A.Sun--ThChM