North Korean hackers use AI deepfakes in spear-phishing attack on South Korea
North Korea’s Kimsuky hackers used AI-generated deepfake IDs in a spear-phishing attack on a South Korean defence body, raising concerns over AI misuse as cyber threats escalate to the level of national security.
Published Date - 15 September 2025, 09:49 AM
Seoul: A North Korea-linked hacking group has carried out a cyberattack on South Korean organisations, including a defence-related institution, using artificial intelligence (AI)-generated deepfake images, a report showed on Monday.
The Kimsuky group, a hacking unit believed to be sponsored by the North Korean government, attempted a spear-phishing attack on a military-related organisation in July, according to a report by the Genians Security Center (GSC), a South Korean security institute, Yonhap news agency reported.
Spear-phishing is a targeted cyberattack, often conducted through personalised emails that impersonate trusted sources.
The report said the attackers sent an email with malicious code attached, disguised as correspondence about ID issuance for military-affiliated officials. The ID card image used in the attempt was presumed to have been produced by a generative AI model, marking an instance of the Kimsuky group applying deepfake technology.
Typically, AI platforms such as ChatGPT reject requests to generate copies of military IDs, since government-issued identification documents are legally protected.
However, the GSC report noted that the hackers appear to have bypassed restrictions by requesting mock-ups or sample designs for “legitimate” purposes, rather than direct reproductions of actual IDs.
The findings follow a separate report published in August by US-based Anthropic, developer of the AI service Claude, which detailed how North Korean IT workers misused AI.
That report said the workers generated manipulated virtual identities to pass technical assessments during job applications, as part of a broader scheme to circumvent international sanctions and secure foreign currency for the regime.
GSC said such cases highlight North Korea’s growing attempts to exploit AI services for increasingly sophisticated malicious activities.
“While AI services are powerful tools for enhancing productivity, they also represent potential risks when misused as cyber threats at the level of national security,” it said.
“Therefore, organisations must proactively prepare for the possibility of AI misuse and maintain continuous security monitoring across recruitment, operations, and business processes.”