Cyber Talk: Growing concern around deepfakes
This tech can be used to spread misinformation and propaganda, and to blackmail people
Published Date - 12:45 AM, Tue - 31 January 23
Deepfakes are a type of AI-based technology that allows for the creation of highly realistic digital forgeries, in which an individual’s face and/or voice is replaced with someone else’s.
This technology can be used to create convincing videos of real people doing or saying things they never did. Deepfakes are a growing concern because they can be used to spread misinformation and propaganda, and to harass or blackmail individuals.
Social media, govt’s role
Social media users could use a face-generating AI system to hide their identity in photos posted by others. Some social media sites, such as Facebook and Instagram, already let users decide whether they are tagged in photos. Other areas of focus should be (a) commercialising fact-checking as a service, (b) creating a regulatory framework, (c) creating structured fact-checking data through ClaimReview, as sketched below, and (d) collaborating regionally.
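For readers unfamiliar with ClaimReview, it is the schema.org markup that fact-checkers embed in their pages so that search engines and aggregators can surface verdicts alongside claims. The sketch below is purely illustrative; every name, URL and rating in it is a made-up placeholder, not a real fact-check.

```python
import json

# Illustrative ClaimReview markup (schema.org/ClaimReview); all names,
# URLs and ratings below are hypothetical placeholders.
claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "url": "https://example-factchecker.example/reviews/1234",  # hypothetical
    "claimReviewed": "Video shows the official announcing a fund transfer",
    "author": {"@type": "Organization", "name": "Example Fact Checker"},
    "datePublished": "2023-01-31",
    "itemReviewed": {
        "@type": "Claim",
        "appearance": {"@type": "VideoObject", "url": "https://example.com/clip.mp4"},
    },
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 1,
        "bestRating": 5,
        "worstRating": 1,
        "alternateName": "False (deepfake)",
    },
}

# Publishing this JSON-LD alongside a fact-check lets search engines and
# fact-check aggregators pick up the verdict automatically.
print(json.dumps(claim_review, indent=2))
```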
Social engineering crimes using deepfakes
Deepfakes can be used as a tool in social engineering crimes, in which attackers use manipulation and deception to trick individuals into divulging sensitive information or performing certain actions.
–One example of how deepfakes can be used in social engineering is by creating a video of a high-level executive or official, such as a CEO or government official, asking for sensitive information or requesting a transfer of funds. This video can then be sent to an employee or individual, who may not be able to tell that the video is fake, and may comply with the request.
–Another example is creating a fake video of a friend or family member asking for money or other financial help.
–Deepfakes can also be used in impersonation attacks, in which an attacker creates a fake video or audio of an individual, such as a celebrity or public figure, and uses it to impersonate that person and gain access to sensitive information or resources.
Fact-checking deepfakes
There are several steps that can be taken to help verify the authenticity of a video:
–Check the source: Look for information about the video’s origin and try to verify that the source is reputable and credible.
–Check other sources: Look for other sources that report on the same video or story, and compare the information to see if it matches up.
–Look for inconsistencies: Check for any inconsistencies or errors in the video, such as unnatural movements or expressions, mismatched audio or inconsistencies in the lighting or shadows.
–Use specialised tools: Several tools can analyse videos and flag signs of manipulation, including research deepfake-detection models trained to distinguish real faces from synthetic ones (a code sketch follows this list).
–Verify with the person: If the video features a specific person, try to reach out to that person and verify the authenticity of the video.
–Check with factly.in/ or www.boomlive.in/ or www.altnews.in/ or any other official member of the International Fact-Checking Network to validate suspected deepfakes.
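As a rough idea of how such specialised tools are used in practice, the sketch below samples frames from a video and runs them through a pretrained image classifier via the Hugging Face pipeline API. The model id is a placeholder, not a specific tool named by this column; any published real-vs-fake face classifier could be substituted, and its output should be treated as one signal among many, not a verdict.

```python
import cv2                          # pip install opencv-python
from PIL import Image
from transformers import pipeline   # pip install transformers torch

# Hypothetical model id -- substitute any published deepfake/real-vs-fake
# image classifier from a model hub.
MODEL_ID = "some-org/deepfake-image-classifier"

detector = pipeline("image-classification", model=MODEL_ID)

def score_video(path, every_nth=30):
    """Sample one frame per `every_nth` and return per-frame top labels."""
    cap = cv2.VideoCapture(path)
    results, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_nth == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            results.append(detector(Image.fromarray(rgb), top_k=1)[0])
        idx += 1
    cap.release()
    return results

# Example: count how many sampled frames the classifier labels as fake.
scores = score_video("suspect_clip.mp4")
fake_frames = sum(1 for r in scores if "fake" in r["label"].lower())
print(f"{fake_frames}/{len(scores)} sampled frames classified as fake")
```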
Spotting deepfakes
–Look for unnatural movements or expressions: Deepfakes often struggle to perfectly replicate the subtle movements and expressions of real people. Look for unnatural or exaggerated movements in the video.
–Check the audio: Deepfakes may not perfectly match the audio with the video, so listen for any inconsistencies or errors in the audio.
–Check the lighting and shadows: Deepfakes may not accurately replicate the lighting and shadows in a scene; look for inconsistencies or errors in these elements.
–Check the background: In a deepfake video, the background may be out of focus, or not match the movement of the subject.
–Check the blinking: Deepfakes may not reproduce natural blinking, so pay attention to whether the person blinks too little or too much (a simple blink-rate check is sketched after this list).
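The blinking cue can even be checked programmatically. The sketch below uses the widely cited eye-aspect-ratio (EAR) heuristic with dlib's 68-point face landmarks to estimate blinks per minute; the threshold, file names and the "roughly 15 to 20 blinks a minute" figure for humans are assumptions, and an unusual rate is only a hint, not proof of a deepfake.

```python
import cv2
import dlib            # pip install dlib; requires the 68-landmark model file
import numpy as np

detector = dlib.get_frontal_face_detector()
# Standard 68-point landmark model (shape_predictor_68_face_landmarks.dat)
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

LEFT_EYE = range(42, 48)    # landmark indices in the 68-point scheme
RIGHT_EYE = range(36, 42)

def eye_aspect_ratio(pts):
    # EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); it drops sharply when the eye closes
    p = np.array(pts, dtype=float)
    return (np.linalg.norm(p[1] - p[5]) + np.linalg.norm(p[2] - p[4])) / (
        2.0 * np.linalg.norm(p[0] - p[3])
    )

def blink_rate(path, ear_threshold=0.21):
    """Rough blinks-per-minute estimate; the threshold is a heuristic assumption."""
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25
    blinks, eye_closed, frames = 0, False, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames += 1
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector(gray)
        if not faces:
            continue
        shape = predictor(gray, faces[0])
        ear = np.mean([
            eye_aspect_ratio([(shape.part(i).x, shape.part(i).y) for i in eye])
            for eye in (LEFT_EYE, RIGHT_EYE)
        ])
        if ear < ear_threshold and not eye_closed:
            blinks, eye_closed = blinks + 1, True
        elif ear >= ear_threshold:
            eye_closed = False
    cap.release()
    minutes = frames / fps / 60
    return blinks / minutes if minutes else 0.0

# Humans typically blink roughly 15-20 times a minute; a rate far outside
# that range is one more reason to scrutinise the clip.
print(f"Estimated blink rate: {blink_rate('suspect_clip.mp4'):.1f} per minute")
```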
Conclusion
Deepfake technology is constantly evolving, and deepfakes are becoming more difficult to spot. Therefore, it is essential to remain sceptical and verify the authenticity of any video before accepting it as true.
