Hacking has an evil twin! What is vibe hacking? Here’s how cyber frauds are misusing AI
As if cyber frauds were not enough, you will now have to deal with another evil of the AI era: vibe hacking. Cybersecurity experts are warning that criminals are increasingly misusing AI to launch sophisticated cyberattacks. What started as “vibe coding”, using AI to write software from natural-language prompts, now has a darker side: “vibe hacking”.

AI developer Anthropic reported that its coding tool, Claude Code, was recently exploited to steal personal data from 17 organisations, with hackers demanding nearly $500,000 from each victim, according to an ET report.

Dark web forums now offer ready-made AI tools, dubbed “Evil LLMs”, for as little as $100. Examples include FraudGPT and WormGPT, which are built specifically for cybercrime. These tools bypass safety measures and can trick AI systems into leaking sensitive information or producing harmful content. A new AI-powered threat called PromptLock raises the stakes even further: it can generate code on demand and decide which files to copy, encrypt, or access.

“Generative AI has lowered the barrier of entry for cybercriminals,” Huzefa Motiwala, senior director at Palo Alto Networks, told ET. “We’ve seen how easily attackers can use mainstream AI services to generate convincing phishing emails, write malicious code, or obfuscate malware.”

In simulations, Palo Alto Networks’ Unit 42 team demonstrated that AI could carry out a full ransomware attack in just 25 minutes, a whopping 100 times faster than traditional methods. Prompt injection, where carefully crafted inputs hijack a model’s goals, allows attackers to override security rules or expose sensitive data.

Motiwala explained, “Attacks don’t only come from direct user prompts, but also from poisoned data in retrieval systems or even embedded instructions inside documents and images that models later process.” Research by Unit 42 found that certain prompt attacks succeed against commercial models 88% of the time.

“AI has become a cybercrime enabler, and the Claude Code incident marks a turning point,” Sundareshwar Krishnamurthy, partner at PwC India, told ET. “Cybercriminals are actively misusing off-the-shelf AI tools, essentially chatbots modelled on generative AI systems but stripped of safety guardrails and sold on dark web forums.”

Authorities in Gujarat have also cautioned that such AI kits are being sold via encrypted messaging apps.

“These tools automate everything from crafting highly convincing phishing emails to writing polymorphic malware and orchestrating social-engineering campaigns at scale,” said Tarun Wig, CEO of Innefu Labs. “Attackers can generate deepfake audio or video, customise ransomware, and even fine-tune exploits against specific targets.”

Autonomous AI agents make the threat worse: they remember tasks, reason independently, and act without direct human input.

Vrajesh Bhavsar, CEO of Operant AI, pointed to risks from open-source Model Context Protocol (MCP) servers. “We’re seeing vectors like tool poisoning and context poisoning, where malicious code embedded in open repositories can compromise sensitive API keys or data,” he said. “Even zero-click attacks are rising, where malicious prompts are baked into shared files.”

Experts say AI developers, including OpenAI, Anthropic, Meta, and Google, must do more to prevent misuse. “They must implement stronger safeguards, continuous monitoring, and rigorous red teaming,” said Wig. “Much like pharmaceuticals undergo safety trials, AI models need structured safety assessments before wide release.”