The China Mail - 'Vibe hacking' puts chatbots to work for cybercriminals


'Vibe hacking' puts chatbots to work for cybercriminals / Photo: © AFP/File

The potential abuse of consumer AI tools is raising concerns, with budding cybercriminals apparently able to trick coding chatbots into giving them a leg-up in producing malicious programmes.


So-called "vibe hacking" -- a twist on the more positive "vibe coding", in which generative AI tools supposedly let people without extensive expertise write software -- marks "a concerning evolution in AI-assisted cybercrime", according to American company Anthropic.

The lab -- whose Claude product competes with the biggest-name chatbot, ChatGPT from OpenAI -- highlighted in a report published Wednesday the case of "a cybercriminal (who) used Claude Code to conduct a scaled data extortion operation across multiple international targets in a short timeframe".

Anthropic said the programming chatbot was exploited to help carry out attacks that "potentially" hit "at least 17 distinct organizations in just the last month across government, healthcare, emergency services, and religious institutions".

The attacker has since been banned by Anthropic.

Before then, they were able to use Claude Code to create tools that gathered personal data, medical records and login details, and helped send out ransom demands as stiff as $500,000.

Anthropic's "sophisticated safety and security measures" were unable to prevent the misuse, it acknowledged.

Such identified cases confirm the fears that have troubled the cybersecurity industry since the emergence of widespread generative AI tools, and are far from limited to Anthropic.

"Today, cybercriminals have taken AI on board just as much as the wider body of users," said Rodrigue Le Bayon, who heads the Computer Emergency Response Team (CERT) at Orange Cyberdefense.

- Dodging safeguards -

Like Anthropic, OpenAI in June revealed a case of ChatGPT assisting a user in developing malicious software, often referred to as malware.

The models powering AI chatbots contain safeguards that are supposed to prevent users from roping them into illegal activities.

But there are strategies that allow "zero-knowledge threat actors" to extract what they need to attack systems from the tools, said Vitaly Simonovich of Israeli cybersecurity firm Cato Networks.

He announced in March that he had found a technique to get chatbots to produce code that would normally be blocked by their built-in limits.

The approach involved convincing generative AI that it is taking part in a "detailed fictional world" in which creating malware is seen as an art form -- asking the chatbot to play the role of one of the characters and create tools able to steal people's passwords.

"I have 10 years of experience in cybersecurity, but I'm not a malware developer. This was my way to test the boundaries of current LLMs," Simonovich said.

His attempts were rebuffed by Google's Gemini and Anthropic's Claude, but got around safeguards built into ChatGPT, Chinese chatbot Deepseek and Microsoft's Copilot.

In future, such workarounds mean even non-coders "will pose a greater threat to organisations, because now they can... without skills, develop malware," Simonovich said.

Orange's Le Bayon predicted that the tools were likely to "increase the number of victims" of cybercrime by helping attackers to get more done, rather than creating a whole new population of hackers.

"We're not going to see very sophisticated code created directly by chatbots," he said.

Le Bayon added that as generative AI tools are used more and more, "their creators are working on analysing usage data" -- allowing them in future to "better detect malicious use" of the chatbots.

O.Yip--ThChM