How underground groups use stolen identities and deepfakes

These deepfake videos are already being used to cause trouble for public figures. Celebrities, high-ranking government officials, well-known corporate figures, and others who have many high-resolution images and videos online are the easiest to target. Social engineering scams that use their faces and voices are already proliferating.

Given the tools and deepfake technology available, we can expect to see even more attacks and scams aimed at manipulating victims through voice and video spoofing.

How deepfakes can affect existing attacks, scams, and monetization schemes

Deepfakes can be adapted by criminal actors for today’s malicious activities, and we are already seeing the first wave of such attacks. Below is a list of existing attacks, as well as attacks we can expect in the near future:

• Messenger scams. Impersonating a money manager and calling about a money transfer has been a popular scam for years, and criminals can now use deepfakes in video calls. For example, they could impersonate someone and contact that person’s friends and family to request a money transfer or a simple top-up of their phone balance.

• Business email compromise (BEC). This attack was already quite successful even without deepfakes. Now attackers can use fake videos in calls, impersonate executives or business partners, and request money transfers.

• Account creation. Criminals can use deepfakes to bypass identity verification services and create accounts at banks, financial institutions, and possibly even government services on behalf of other people, using copies of stolen identity documents. With the victim’s identity, these criminals can bypass the verification process, which is often done via video calls. The accounts can later be used for money laundering and other malicious activities.

• Account hijacking. Criminals can take over accounts that require identification via video calls. They can hijack a financial account and simply withdraw or transfer funds. Some financial institutions require online video verification before certain features are enabled in their online banking applications. Obviously, these verifications could also be a target for deepfake attacks.

• Blackmail. With deepfake videos, malicious actors can mount more convincing extortion and related attacks. They can even plant fake evidence created with deepfake technologies.

• Disinformation campaigns. Deepfake videos also make disinformation campaigns more effective and could be used to manipulate public opinion. Some attacks, such as pump-and-dump schemes, rely on messages from well-known people, and those messages can now be created with deepfake technology. Undoubtedly, these schemes can have financial, political, and even reputational repercussions.

• Technical support scams. Deepfake actors can use fake identities to socially engineer unsuspecting users into sharing payment credentials or granting access to IT assets.

• Social engineering attacks. Malicious actors can use deepfakes to manipulate the friends, family, or colleagues of an impersonated person. Social engineering attacks such as those Kevin Mitnick was famous for can thus be given a new twist.

• Hijacking of Internet of Things (IoT) devices. Devices that use voice or facial recognition, such as Amazon’s Alexa and many smartphones, will be on deepfake offenders’ target list.

Conclusion and safety recommendations

We are already seeing the first wave of criminal and malicious activity with deepfakes. However, more serious attacks are likely in the future due to the following issues:

• There is enough content exposed on social media to create deepfake models of millions of people. People in every country, city, town, or social group have their social media presence exposed to the world.
• All the technological pillars are in place. Implementing attacks does not require significant investment, and attacks can be launched not only by states and large corporations but also by individuals and small criminal groups.
• Actors can already impersonate and steal the identities of politicians, C-level executives, and celebrities. This could significantly increase the success rate of certain attacks, such as financial schemes, short-lived disinformation campaigns, manipulation of public opinion, and extortion.
• The identities of ordinary people are available to be stolen or recreated from publicly exposed media. Cybercriminals can steal from the impersonated victims or use their identities for malicious activities.
• The modification of deepfake models can lead to the mass appearance of identities of people who never existed. These identities can be used in different fraud schemes. Indicators of such occurrences have already been detected in the wild.

What can individuals and organizations do to address and mitigate the impact of deepfake attacks? We have some recommendations for regular users as well as organizations that use biometric patterns for validation and authentication. Some of these validation methods could also be automated and deployed more generally.

• A multi-factor authentication approach should be standard for any sensitive or critical account.
• Organizations should authenticate a user against three basic factors: something the user has, something the user knows, and something the user is. Make sure the “something” items are chosen wisely.
• Staff awareness training, made with relevant samples, and the Know Your Customer (KYC) principle are necessary for financial organizations. Deepfake technology is not perfect, and there are certain red flags that an organization’s staff should look for.
• Social media users should minimize the exposure of high-quality personal images.
• For verification of sensitive accounts (for example, banking or corporate profiles), users should prioritize biometric patterns that are less exposed to the public, such as irises and fingerprints.
• Significant policy changes are required to address the problem on a larger scale. These policies should address the use of current and previously disclosed biometric data. They must also consider the current state of cybercriminal activity and prepare for the future.
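To illustrate the “something the user has” factor from the recommendations above, the sketch below implements time-based one-time passwords (TOTP, RFC 6238) with only the Python standard library. This is a minimal sketch for illustration, not a production MFA system; the function names and the demo secret are our own, not taken from any particular product.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6, step=30):
    """Compute an RFC 6238 time-based one-time password.

    secret_b32 is the base32-encoded shared secret that the user's
    authenticator device and the server both hold ("something the user has").
    """
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if t is None else t) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

def verify_totp(secret_b32, submitted, t=None, window=1, step=30):
    """Check a submitted code, tolerating +/- `window` time steps of clock drift."""
    now = time.time() if t is None else t
    return any(
        hmac.compare_digest(totp(secret_b32, now + k * step, step=step), submitted)
        for k in range(-window, window + 1)
    )
```

A possession factor like this should complement, not replace, the other two factors: a password (“something the user knows”) and, where available, a carefully chosen biometric (“something the user is”).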

The security implications of deepfake technology and the attacks that use it are real and damaging. As we’ve shown, not only organizations and C-level executives but also ordinary individuals are potential victims of these attacks. Given the wide availability of the necessary tools and services, these techniques are accessible to less technically sophisticated attackers and groups, meaning that malicious actions could be executed at scale.


About the Author: Ted Simmons

I follow and report on current news trends on Google News.
