A recent scam involving fake bank apps in Hyderabad exposed a systemic failure in digital security design, where systems assume perfect human behaviour
By Prof Rajiva Ranjan, Dr Siddhartha Bhattacharya, Dr Sukhamaya Swain
Recently, several residents of Hyderabad lost more than Rs 11 lakh in a cyber fraud that followed a familiar pattern. The victims received WhatsApp messages that appeared to be from their banks, urging them to install an app to resolve an “urgent issue.” One of the victims, a small business owner, downloaded what looked like a legitimate banking application. Within hours, money was withdrawn from the victim’s account using credentials and one-time passwords that the system treated as valid.
The immediate reaction was predictable, one heard many times before: people should have been more careful and verified the message with the authorities. But that explanation misses the real point.
These incidents are not happening because people are careless or unaware. They are happening because our digital systems are built on an unrealistic assumption: that ordinary people will behave perfectly in high-pressure digital environments and exercise constant caution every single time.
Myth of the ‘Careless User’
Modern cyberattacks usually do not rely on breaking technology. They rely on persuading people through urgent but convincing messages, or sometimes through what appear to be routine requests. Attackers understand how people actually work under time pressure, distraction and information overload.
Across offices, hospitals, colleges, banks and government departments, people deal daily with documents, payments, notifications, service alerts and system updates, most of them genuine. Cyber criminals understand and exploit this reality. That is why today’s phishing messages do not look suspicious; rather, they look routine.
People respond quickly, not because they are careless, but because that is how modern work functions. Expecting every individual to pause, verify, consult (if required) and double-check each digital interaction is not only unrealistic; it runs counter to the very speed and convenience our digital systems are built to deliver.
Training Alone Will Never Work
Cybersecurity discussions still place enormous faith in training and awareness programmes. Posters, advisories and public campaigns repeatedly warn people not to click on unknown or suspicious links. Training is important, but it has clear limits. No amount of instruction can eliminate fatigue, distraction, artificially created urgency, or emotional pressure. Even highly trained professionals make mistakes when rushed, distracted or stressed. Expecting flawless behaviour from everyone, every day and in every situation, is simply not how human beings function.
High-risk fields such as aviation, healthcare and road safety have long understood this. Aviation assumes pilot error and builds multiple layers of protection through checklists, redundant systems and fail-safes. Healthcare systems build protocols assuming clinicians might be fatigued or stressed. Road safety goes beyond just driver actions; it emphasises safer cars and more forgiving road designs.
Cybersecurity, however, still expects ideal behaviour. It should not.
Blind Spots
The problem is not that people make mistakes; the problem is that digital systems are unable to cope when they do. In most organisations, once valid credentials such as a username and password or one-time code are entered, the system assumes the access is legitimate and proceeds. It does not sufficiently question whether the behaviour that follows matches the real user. This creates a dangerous blind spot.
In the Hyderabad bank app scam, the digital systems did not fail at the moment of deception. They failed because they continued to trust actions that should have appeared unusual, such as several transfers in quick succession, unfamiliar access patterns and behaviour inconsistent with the account holder’s normal activity. The damage did not occur because of the click. It occurred because the system failed to recognise abnormal behaviour and respond in time.
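For readers who want to see the idea concretely, here is a minimal sketch in Python of the kind of behavioural check that was missing. It is illustrative only: the thresholds, field names and profile structure are assumptions made for this article, not any bank’s actual rules.

```python
# A minimal illustrative sketch, not any bank's actual system. The thresholds,
# field names and profile structure below are assumptions made for this article.
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class AccountProfile:
    typical_max_amount: float   # largest transfer seen in normal use
    usual_hours: range          # hours of day the holder normally transacts
    known_devices: set          # device fingerprints seen before
    recent_transfers: list = field(default_factory=list)  # timestamps of past transfers

def anomaly_flags(profile: AccountProfile, amount: float,
                  device_id: str, when: datetime) -> list:
    """Return the reasons a transfer looks unlike the account holder."""
    flags = []
    if amount > 2 * profile.typical_max_amount:
        flags.append("amount far above normal pattern")
    if when.hour not in profile.usual_hours:
        flags.append("activity at an unusual hour")
    if device_id not in profile.known_devices:
        flags.append("unfamiliar device")
    # Burst detection: several transfers within a short window.
    window_start = when - timedelta(minutes=10)
    if sum(1 for t in profile.recent_transfers if t >= window_start) >= 3:
        flags.append("rapid succession of transfers")
    return flags
```

In the Hyderabad case, valid credentials and one-time passwords opened the door; even a simple check like this would still have flagged the unfamiliar device and the burst of transfers, giving the system a reason to pause.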
Why This Matters
Cybersecurity failures are often discussed as problems faced by large multinational corporations. In reality, regional and public institutions are often more vulnerable.
Colleges, hospitals, cooperative banks and local bodies frequently operate with limited technical resources and expertise. A single compromised account can lead to significant losses for small investors and institutions alike, eroding public trust in the system.
At a time when governments are pushing more services online, cyber risk has become a civic issue, not merely a technical one. The consequences of failure are borne not by abstract systems, but by ordinary citizens.
Cost of Blaming Individuals
There is a natural tendency to blame the victim: to say they should have been more careful. This creates a false sense of closure, as if the problem has been addressed. In reality, such blame discourages reporting, increases fear and leaves the underlying structural weakness untouched.
A more realistic approach begins by accepting that mistakes will happen. The focus should shift from trying to prevent all cyberattacks to limiting damage by detecting abnormal behaviour quickly. If a system can recognise that a bank account suddenly shows unusual activity, or that hospital records are being accessed at odd hours, harm can be contained.
This shift from blaming individuals to building resilience is urgently needed. Early intervention, not perfect behaviour, should be the goal. The practical implication is straightforward but profound. Digital systems must be designed on the assumption that users will sometimes be deceived. Security should not collapse the moment a credential is compromised.
This means investing in behavioural analytics, anomaly detection, transaction limits and real-time alert generation. It means slowing down high-risk actions, adding friction where it matters and empowering systems to question themselves. Most importantly, it means shifting the narrative from “blaming individuals” to “building resilient institutions”.
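To illustrate what such friction could look like, here is a second sketch, building on the illustrative check above. Again, the per-transfer limit, the cooling period and the second-factor step are assumptions for this article, not any institution’s real policy.

```python
# A minimal illustrative sketch of "friction where it matters". The limit,
# the cooling period and the second-factor step are assumptions, not any
# institution's real policy.
from datetime import timedelta

HIGH_RISK_LIMIT = 50_000             # assumed per-transfer threshold, in rupees
COOLING_PERIOD = timedelta(hours=2)  # assumed delay before a flagged transfer clears

def process_transfer(amount, flags, notify_user, request_second_factor):
    """Execute routine transfers instantly; slow down or block risky ones."""
    if not flags and amount <= HIGH_RISK_LIMIT:
        return "executed immediately"  # routine and low-risk: no friction
    # High-risk path: alert the real account holder in real time...
    notify_user(f"Transfer of Rs {amount} held: {', '.join(flags) or 'large amount'}")
    # ...and demand a second, out-of-band confirmation before money moves.
    if not request_second_factor():
        return "blocked"
    # Even after confirmation, a cooling period gives a deceived user time to react.
    return f"queued; executes after {COOLING_PERIOD}"

# Example: an attacker with valid credentials on an unfamiliar device.
print(process_transfer(
    amount=200_000,
    flags=["unfamiliar device", "rapid succession of transfers"],
    notify_user=print,                    # stand-in for an SMS or push alert
    request_second_factor=lambda: False,  # the real holder never confirms
))  # -> "blocked"
```

The design choice is the point: the credentials remain “valid” throughout, yet the money never moves, because the system no longer equates a correct password with a legitimate user.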
A Lesson Worth Learning Now
The Hyderabad banking scam is not an isolated incident. It is a warning. More such attacks will happen, possibly on a larger scale and with greater sophistication. Today’s approach often resembles placing more guards around a palace while ignoring threats already inside the walls.
Training and awareness campaigns are necessary, but they are not sufficient. Human beings, being human, will make mistakes. The real question is whether our institutions, be it banks, colleges, hospitals or public bodies, are designed to survive those mistakes.
The practical implication is simple: digital systems must assume error and focus on detecting and limiting damage quickly, rather than relying on the hope that mistakes can always be prevented.

(Prof Rajiva Ranjan is Professor of Practice, Dr Siddhartha Bhattacharya and Dr Sukhamaya Swain are Professors of Finance, JK Business School, Gurugram)
