The China Mail - 'Vibe hacking' puts chatbots to work for cybercriminals


'Vibe hacking' puts chatbots to work for cybercriminals
'Vibe hacking' puts chatbots to work for cybercriminals / Photo: © AFP/File

The potential abuse of consumer AI tools is raising concerns, with budding cybercriminals apparently able to trick coding chatbots into giving them a leg-up in producing malicious programmes.

So-called "vibe hacking" -- a twist on the more positive "vibe coding", which generative AI tools supposedly enable even those without extensive expertise to achieve -- marks "a concerning evolution in AI-assisted cybercrime", according to American company Anthropic.

The lab -- whose Claude product competes with the biggest-name chatbot, ChatGPT from OpenAI -- highlighted in a report published Wednesday the case of "a cybercriminal (who) used Claude Code to conduct a scaled data extortion operation across multiple international targets in a short timeframe".

Anthropic said the programming chatbot was exploited to help carry out attacks that "potentially" hit "at least 17 distinct organizations in just the last month across government, healthcare, emergency services, and religious institutions".

The attacker has since been banned by Anthropic.

Before then, they were able to use Claude Code to create tools that gathered personal data, medical records and login details, and helped send out ransom demands as stiff as $500,000.

Anthropic's "sophisticated safety and security measures" were unable to prevent the misuse, it acknowledged.

Such identified cases confirm the fears that have troubled the cybersecurity industry since the emergence of widespread generative AI tools, and are far from limited to Anthropic.

"Today, cybercriminals have taken AI on board just as much as the wider body of users," said Rodrigue Le Bayon, who heads the Computer Emergency Response Team (CERT) at Orange Cyberdefense.

- Dodging safeguards -

Like Anthropic, OpenAI in June revealed a case of ChatGPT assisting a user in developing malicious software, often referred to as malware.

The models powering AI chatbots contain safeguards that are supposed to prevent users from roping them into illegal activities.

But there are strategies that allow "zero-knowledge threat actors" to extract what they need to attack systems from the tools, said Vitaly Simonovich of Israeli cybersecurity firm Cato Networks.

He announced in March that he had found a technique to get chatbots to produce code that would normally infringe on their built-in limits.

The approach involved convincing generative AI that it is taking part in a "detailed fictional world" in which creating malware is seen as an art form -- asking the chatbot to play the role of one of the characters and create tools able to steal people's passwords.

"I have 10 years of experience in cybersecurity, but I'm not a malware developer. This was my way to test the boundaries of current LLMs," Simonovich said.

His attempts were rebuffed by Google's Gemini and Anthropic's Claude, but the technique got around safeguards built into ChatGPT, Chinese chatbot DeepSeek and Microsoft's Copilot.

In future, such workarounds mean even non-coders "will pose a greater threat to organisations, because now they can... without skills, develop malware", Simonovich said.

Orange's Le Bayon predicted that the tools were likely to "increase the number of victims" of cybercrime by helping attackers to get more done, rather than creating a whole new population of hackers.

"We're not going to see very sophisticated code created directly by chatbots," he said.

Le Bayon added that as generative AI tools are used more and more, "their creators are working on analysing usage data" -- allowing them in future to "better detect malicious use" of the chatbots.

O.Yip--ThChM