The China Mail - Firms and researchers at odds over superhuman AI


Firms and researchers at odds over superhuman AI

Firms and researchers at odds over superhuman AI / Photo: © AFP/File

Hype is growing from leaders of major AI companies that "strong" computer intelligence will imminently outstrip humans, but many researchers in the field see the claims as marketing spin.

The belief that human-or-better intelligence -- often called "artificial general intelligence" (AGI) -- will emerge from current machine-learning techniques fuels hypotheses for the future ranging from machine-delivered hyperabundance to human extinction.

"Systems that start to point to AGI are coming into view," OpenAI chief Sam Altman wrote in a blog post last month. Anthropic's Dario Amodei has said the milestone "could come as early as 2026".

Such predictions help justify the hundreds of billions of dollars being poured into computing hardware and the energy supplies to run it.

Others, though, are more sceptical.

Meta's chief AI scientist Yann LeCun told AFP last month that "we are not going to get to human-level AI by just scaling up LLMs" -- the large language models behind current systems like ChatGPT or Claude.

LeCun's view appears backed by a majority of academics in the field.

Over three-quarters of respondents to a recent survey by the US-based Association for the Advancement of Artificial Intelligence (AAAI) agreed that "scaling up current approaches" was unlikely to produce AGI.

- 'Genie out of the bottle' -

Some academics believe that many of the companies' claims, which bosses have at times flanked with warnings about AGI's dangers for mankind, are a strategy to capture attention.

Businesses have "made these big investments, and they have to pay off," said Kristian Kersting, a leading researcher at the Technical University of Darmstadt in Germany and AAAI member.

"They just say, 'this is so dangerous that only I can operate it, in fact I myself am afraid but we've already let the genie out of the bottle, so I'm going to sacrifice myself on your behalf -- but then you're dependent on me'."

Scepticism among academic researchers is not total, with prominent figures like Nobel-winning physicist Geoffrey Hinton or 2018 Turing Prize winner Yoshua Bengio warning about dangers from powerful AI.

"It's a bit like Goethe's 'The Sorcerer's Apprentice', you have something you suddenly can't control any more," Kersting said -- referring to a poem in which a would-be sorcerer loses control of a broom he has enchanted to do his chores.

A similar, more recent thought experiment is the "paperclip maximiser".

This imagined AI would pursue its goal of making paperclips so single-mindedly that it would turn Earth and ultimately all matter in the universe into paperclips or paperclip-making machines -- having first got rid of human beings that it judged might hinder its progress by switching it off.

While not "evil" as such, the maximiser would fall fatally short on what thinkers in the field call "alignment" of AI with human objectives and values.

Kersting said he "can understand" such fears -- while suggesting that "human intelligence, its diversity and quality is so outstanding that it will take a long time, if ever" for computers to match it.

He is far more concerned with near-term harms from already-existing AI, such as discrimination in cases where it interacts with humans.

- 'Biggest thing ever' -

The apparently stark gulf in outlook between academics and AI industry leaders may simply reflect people's attitudes as they pick a career path, suggested Sean O hEigeartaigh, director of the AI: Futures and Responsibility programme at Britain's Cambridge University.

"If you are very optimistic about how powerful the present techniques are, you're probably more likely to go and work at one of the companies that's putting a lot of resource into trying to make it happen," he said.

Even if Altman and Amodei are being "quite optimistic" about rapid timescales and AGI emerges much later, "we should be thinking about this and taking it seriously, because it would be the biggest thing that would ever happen," O hEigeartaigh added.

"If it were anything else... a chance that aliens would arrive by 2030 or that there'd be another giant pandemic or something, we'd put some time into planning for it."

The challenge can lie in communicating these ideas to politicians and the public.

Talk of super-AI "does instantly create this sort of immune reaction... it sounds like science fiction," O hEigeartaigh said.

A.Sun--ThChM