
Worried about the FBI’s deepfake warning? Follow these expert tips

AI render of an online bad actor
Credit: Bing Image Generator

Last week, the Federal Bureau of Investigation (FBI) issued a public service announcement about the rise in deepfake explicit content and how it is being used for crimes like extortion, blackmail, and harassment. In the simplest terms, a deepfake is synthetic media that mimics an original. It can be an AI-generated photo, video, or audio clip.

The name “deepfake” comes from the underlying technology used to create such media: deep learning, which involves training an AI model on original material and then modifying it to generate the desired results. It’s not exactly a new invention, but with generative AI exploding in popularity and accessibility, deepfake crimes are on the rise.


Such is their prevalence that even Republican presidential candidate Ron DeSantis’ campaign used deepfake images of rival Donald Trump to malign him. Deepfakes are also one of the reasons that calls to regulate AI are being raised everywhere. According to the FBI, the content for generating deepfakes is generally lifted from social media posts and video call clips before being modified into sexually explicit material for extortion and bullying.

What the experts say about deepfakes

AI render of a person photographic a smartphone user
Credit: Bing Image Generator

So, what’s the solution? Unfortunately, there isn’t one. At least, not without making a compromise that fundamentally turns the whole meaning of “social media” on its head.

“Unfortunately, the only way to be sure that none of your photos or videos are used to create deep fakes is to stop posting any pictures of yourself online, but that would take a lot of fun from internet users,” says Adrianus Warmenhoven, a cybersecurity advisor at Nord.

“As new and better security practices emerge, malicious actors will find a way to cause harm. It’s a game of catch-up and feels disappointing at times,” says Parteek Saran, an ex-Googler and creator of a next-gen password management tool called Uno. He further suggests adopting a “zero-trust philosophy” when posting content on social media, and stresses effective communication with acquaintances to steer clear of deepfake scams.

“While AI can aid in addressing digital safety concerns, there is no fool-proof safety net,” says Yaron Litwin, Digital Safety Expert & CMO at Canopy. The company’s eponymous app is targeted at keeping children safe from online sexual crimes and also offers a diverse set of parental controls. Litwin adds that you should avoid posting intimate or compromising images, while also reducing the frequency of posting even normal images.

How you can keep yourself safe

AI render of a bad actor spying on another person
Credit: Bing Image Generator

Deepfakes are terrifying, and it’s hard to fathom the trauma they can inflict on a person and their family members. But there are a few ways users can avoid falling into the trap, or at least significantly reduce their exposure.

To understand what steps average smartphone users with typical digital skills can take, I reached out to Andrew Gardner, Vice President of Research and Innovation at Gen Digital, a software company that offers trusted safety tools like Norton, Avast, and Avira, among others.

Gardner says safety protocols begin at the fundamental level. Users should start by making their profiles private, or at least changing the post visibility settings so that only people they mutually follow can see and interact with their posts. Parental controls, which are now available on almost every major social media platform, should be enabled diligently so that guardians can keep an eye on any suspicious interactions.

The standard advice is to accept requests only from people you know, but taking an extra step can go a long way. “If you are networking, check invitees’ recent posts and activities to gauge how real they are,” says Gardner, adding that one should be wary of “accounts with few friends or mutual friends.” Another crucial piece of advice from the Gen Digital executive is to check and restrict social media logins.

AI render of a masked person spying on another person
Credit: Bing Image Generator

Users often visit online services and, to avoid the hassle of creating an account, opt for the social media sign-in option. “This gives apps access to personal information, and in some situations, those apps sell that information to third parties,” he says. The Cambridge Analytica scandal involving Facebook is a prime example. One should periodically check which apps are connected to their social media accounts and revoke access for any that aren’t necessary.

Deepfakes are sophisticated AI-powered crimes, but Gardner suggests users should still adhere to some basic safety guidelines — such as enabling two-factor authentication, using strong passwords, enabling biometric passkeys, and avoiding any suspicious or unknown links.
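
On the “strong passwords” point, a password manager is the easiest route, but the idea is simple enough to illustrate in a few lines of Python. The sketch below is an illustration, not something Gardner prescribed: it uses only the standard library’s secrets module, and the 20-character length is an arbitrary choice.

```python
# Minimal strong-password generator using only the Python standard library.
import secrets
import string

# Draw from letters, digits, and punctuation for maximum entropy per character.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 20) -> str:
    # secrets uses a cryptographically secure random source,
    # unlike the general-purpose random module.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_password())
```

A string this long and random is effectively unguessable, which is exactly why storing it in a password manager rather than your memory is the practical next step.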

Social media done safely and responsibly

AI render of people taking selfies.
Credit: Bing Image Generator

Canopy’s Litwin is of the opinion that the more you share, the easier it becomes to create convincing deepfakes. There is no dearth of shady AI image generation models with no restrictions on creating explicit material. These tools rely on image inputs to train the model, and the more training data they are fed, the more accurate and realistic the deepfakes become.

It’s the standard tactic implemented by mainstream AI image generators like Midjourney. If your social feed is open and has an ample number of photos and videos, there’s nothing stopping a bad actor from scraping them to create compromising deepfakes. But if you are among the folks who see value in social media as a place to preserve your most cherished memories, there are a few measures you should take.

Using the Twitter app on the Xiaomi 13 Pro.
Andy Boxall/Digital Trends

“Be cautious of what personal information you share online, adjust privacy settings on your accounts, enable two-factor authentication, and carefully review images for any imperfections,” notes Boyd Clewis, a cybersecurity expert on the Forbes Security Council and author of “Through The Firewall: The Alchemy Of Turning Crisis Into Opportunity.”

But how do you reliably spot a social media profile that likely has a bad actor behind it, engaged in shady acts like creating and disseminating deepfakes? “If video or audio material is shared from suspicious profiles, and the accounts do not contain any personal information or photos, it is likely that the profile is fake,” suggests Tomas Samulis, an information security architect at Baltic Amadeus. He adds that such profiles, lacking personally identifiable information, are specifically created to spread fakes and other controversial information.

What to do if you get deepfaked

AI render of a masked person using a phone
Credit: Bing Image Generator

But there’s only so much precaution an average smartphone user can take. Even the most digitally savvy users find themselves on the receiving end of cybercrime, despite using all the standard tools like two-factor authentication, private profiles, and biometric safeguards.

If, despite taking all the precautions, you still find yourself at the center of a deepfake crime, seek expert advice and help from the authorities instead of taking matters into your own hands. Experts note that sitting on such harassment, or trying to handle it discreetly on one’s own, often worsens the situation for victims.

“If you discover that someone is misusing your content, seek counsel familiar with the copyright law to help you get the content removed as quickly as possible,” says Rob Scott, a member of the Dallas Bar Association and licensed legal expert in areas like cybersecurity risk and data privacy.

Consulting a legal expert is crucial because they can guide you through your digital rights. Canopy’s Litwin also suggests preserving all the evidence, advising victims to “document any evidence of the extortion attempts, such as messages, emails, or any form of communication related to the extortion.”

Another crucial piece of advice is that victims should immediately cease all contact with the criminal, who can otherwise manipulate or harass them further with more serious extortion demands. At the same time, they should contact a cybersecurity expert or call one of the government cybercrime helplines to take the right measures in time.

How to spot deepfakes

AI render of a human face
Credit: Bing Image Generator

With AI engines getting increasingly sophisticated, the deepfakes they produce are getting eerily realistic and hard to spot. However, there are still a few markers users can pay attention to in order to spot synthetically altered or AI-generated compromising material.

Here is a compilation of the deepfake identifiers experts have to offer:

  • Look out for unnatural eye movements. If a person’s eyes don’t appear to blink, the eye movement is off, or the facial expressions don’t appear to be in sync with the words being spoken, it is most likely a deepfake clip. A lack of emotion, or incoherent emotion, is another telltale marker that the media has been digitally morphed (a rough blink-check sketch follows this list).
  • “Deepfake technology typically focuses on facial features,” says Gardner. “If the person’s body shape does not seem natural, or if their movements are jerky and disjointed, the video is likely a deepfake.”
  • Another reliable marker is the background, which can appear unnaturally blurry or show odd visual artifacts. Another easy way to spot deepfakes is to look for abnormal discoloration or a serious color mismatch, especially between the face and the shadows of items around it.
  • If you come across a photo in which a person is rocking “perfect hair” and you can’t spot any individual elements like a few strands of flyaway hair or frizziness, stay cautious. AI models are also known to struggle with teeth. Look out for teeth that are unnaturally perfect, that lack outlines for individual teeth, or that appear more numerous than a typical human set.
  • Misaligned body parts, blurred edges, a few extra or fewer fingers, oddly contorted limbs, out-of-sync movement of body parts, and a voice lacking pauses and emotional breaks are the other signs you should closely watch out for. “Fake voice or audio recordings often have background noise, robotic-sounding voices, and strange pronunciations,” explains Samulis.
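
The blinking cue from the first bullet can even be roughed out in code. The sketch below is a minimal illustration, not a production detector: it counts blinks in a clip using the eye aspect ratio (EAR) over MediaPipe Face Mesh landmarks. The landmark indices and the 0.2 threshold are commonly used heuristics rather than definitive values, and “sample.mp4” is a placeholder path.

```python
# Rough blink counter: a long clip of a talking head with zero blinks is suspicious.
# Assumes: pip install mediapipe opencv-python. "sample.mp4" is a placeholder path.
import cv2
import mediapipe as mp

# Commonly used Face Mesh landmark indices around each eye:
# [outer corner, upper lid x2, inner corner, lower lid x2]
LEFT_EYE = [33, 160, 158, 133, 153, 144]
RIGHT_EYE = [362, 385, 387, 263, 373, 380]
EAR_THRESHOLD = 0.2  # heuristic: below this, the eye is treated as closed

def eye_aspect_ratio(lm, idx):
    # EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|). Landmark coordinates are
    # normalized by MediaPipe; good enough for a rough open/closed ratio.
    p = [(lm[i].x, lm[i].y) for i in idx]
    dist = lambda a, b: ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    return (dist(p[1], p[5]) + dist(p[2], p[4])) / (2.0 * dist(p[0], p[3]))

blinks, eye_closed = 0, False
cap = cv2.VideoCapture("sample.mp4")  # hypothetical input clip
with mp.solutions.face_mesh.FaceMesh(max_num_faces=1) as face_mesh:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        result = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if not result.multi_face_landmarks:
            continue  # no face found in this frame
        lm = result.multi_face_landmarks[0].landmark
        ear = (eye_aspect_ratio(lm, LEFT_EYE) + eye_aspect_ratio(lm, RIGHT_EYE)) / 2
        if ear < EAR_THRESHOLD and not eye_closed:
            eye_closed = True  # eye just closed: start of a blink
        elif ear >= EAR_THRESHOLD and eye_closed:
            eye_closed, blinks = False, blinks + 1  # eye reopened: blink complete
cap.release()
print(f"Blinks detected: {blinks}")  # people typically blink 15-20 times a minute
```

Treat the count as one signal among many: a blink-free clip merely warrants suspicion, not proof of forgery, and newer generators increasingly reproduce natural blinking.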

A serious problem with half-effective solutions

AI render of a person peeping through phone
Credit: Bing Image Generator

As explained above, there is no fool-proof safety net against deepfakes. But if you are careful about where you share photos, who can see them, and how far your social media access goes, you can stay in a relatively safe zone.

One should also pay attention to who their online friends are, and vet the activities of new invites before adding them to their friend circle on social media. As far as deepfakes go, that’s a bit tricky. But if you stay vigilant and take the time to assess inconsistencies in a suspicious photo or video, deepfakes can be spotted with a fairly high degree of accuracy.

At the end of the day, it’s all about cultivating hygienic online habits and staying vigilant in an increasingly forgery-prone online space.

Nadeem Sarwar
Nadeem is a tech journalist who started reading about cool smartphone tech out of curiosity and soon started writing…