The China Mail - 'Vibe hacking' puts chatbots to work for cybercriminals


'Vibe hacking' puts chatbots to work for cybercriminals

Photo: © AFP/File

The potential abuse of consumer AI tools is raising concerns, with budding cybercriminals apparently able to trick coding chatbots into giving them a leg-up in producing malicious programmes.

So-called "vibe hacking" -- a dark twist on "vibe coding", the practice by which generative AI tools supposedly let people without extensive expertise produce working software -- marks "a concerning evolution in AI-assisted cybercrime", according to American company Anthropic.

The lab -- whose Claude product competes with the biggest-name chatbot, ChatGPT from OpenAI -- highlighted in a report published Wednesday the case of "a cybercriminal (who) used Claude Code to conduct a scaled data extortion operation across multiple international targets in a short timeframe".

Anthropic said the programming chatbot was exploited to help carry out attacks that "potentially" hit "at least 17 distinct organizations in just the last month across government, healthcare, emergency services, and religious institutions".

The attacker has since been banned by Anthropic.

Before then, they were able to use Claude Code to create tools that gathered personal data, medical records and login details, and helped send out ransom demands as stiff as $500,000.

Anthropic's "sophisticated safety and security measures" were unable to prevent the misuse, it acknowledged.

Such identified cases confirm the fears that have troubled the cybersecurity industry since the emergence of widespread generative AI tools, and are far from limited to Anthropic.

"Today, cybercriminals have taken AI on board just as much as the wider body of users," said Rodrigue Le Bayon, who heads the Computer Emergency Response Team (CERT) at Orange Cyberdefense.

- Dodging safeguards -

Like Anthropic, OpenAI in June revealed a case of ChatGPT assisting a user in developing malicious software, often referred to as malware.

The models powering AI chatbots contain safeguards that are supposed to prevent users from roping them into illegal activities.

But there are strategies that allow "zero-knowledge threat actors" to extract what they need to attack systems from the tools, said Vitaly Simonovich of Israeli cybersecurity firm Cato Networks.

He announced in March that he had found a technique to get chatbots to produce code that would normally infringe on their built-in limits.

The approach involved convincing generative AI that it is taking part in a "detailed fictional world" in which creating malware is seen as an art form -- asking the chatbot to play the role of one of the characters and create tools able to steal people's passwords.

"I have 10 years of experience in cybersecurity, but I'm not a malware developer. This was my way to test the boundaries of current LLMs," Simonovich said.

His attempts were rebuffed by Google's Gemini and Anthropic's Claude, but got around safeguards built into ChatGPT, Chinese chatbot DeepSeek and Microsoft's Copilot.

Such workarounds mean that in future even non-coders "will pose a greater threat to organisations, because now they can... without skills, develop malware," Simonovich said.

Orange's Le Bayon predicted that the tools were likely to "increase the number of victims" of cybercrime by helping attackers to get more done, rather than creating a whole new population of hackers.

"We're not going to see very sophisticated code created directly by chatbots," he said.

Le Bayon added that as generative AI tools are used more and more, "their creators are working on analysing usage data" -- allowing them in future to "better detect malicious use" of the chatbots.

O.Yip--ThChM