New AI threat: Prompt Injection attack poses critical risk business data
Cyber security officials in Hyderabad have warned businesses about rising prompt injection attacks that trick AI systems into leaking sensitive data. Experts say many organisations link AI tools to internal systems, increasing the risk of serious breaches without stronger guardrails
Published Date - 1 December 2025, 08:41 PM
Hyderabad: A recently reported critical vulnerability, known as prompt injection, is threatening to become a major risk for modern businesses that depend on Artificial Intelligence (AI) systems, according to reports released by cyber security authorities in Hyderabad.
This attack targets the language models that power AI chatbots, which are central to customer services. It allows cybercriminals to bypass safety rules and manipulate AI systems into revealing confidential internal or customer data.
AI models operate based on the instructions users give them, known as prompts. Cybercriminals are now using these prompts in harmful ways. By inserting cleverly crafted malicious instructions, attackers can manipulate AI systems into revealing information that should remain protected.
In simple terms, it involves tricking the AI with specific words or phrases and confusing it into leaking internal company documents, customer records or system details. This technique, which is spreading fast among cybercriminals, is one of the fastest growing attack methods in the AI-driven sector.
According to cybercrime authorities, many companies integrate AI tools directly with sensitive internal systems, CRM databases, support-ticket dashboards, employee information and financial records.
“Ideally, this data should remain completely inaccessible to end users. However, a single deceptive command from a hacker may be enough for the AI to reveal confidential data, posing a high-risk breach for organisations,” said the Hyderabad Cybercrime Police.
Meanwhile, cybersecurity experts say businesses must urgently deploy prompt guardrails, which are protective layers that prevent AI from obeying harmful instructions. They say security can no longer rely on a single barrier and that companies must adopt a multi-layer defence strategy.
Key safety measures for organisations:
Provide AI models with safety training and strict rules.
Deploy systems to detect and block malicious or manipulative prompts.
Enforce strong controls on data access, APIs and backend integrations.
Conduct frequent security audits and strictly restrict access to sensitive datasets.