
Hackers are using AI to spread dangerous malware on YouTube

YouTube is the latest frontier where AI-generated content is being used to dupe users into downloading malware that can steal their personal information.

As AI-generated content becomes increasingly popular across platforms, so does the desire to profit from it in malicious ways. The research firm CloudSEK has observed a 200% to 300% increase since November 2022 in the number of YouTube videos whose descriptions contain links to well-known infostealer malware such as Vidar, RedLine, and Raccoon.


The videos are set up as tutorials for downloading cracked versions of software that normally requires a paid license, such as Photoshop, Premiere Pro, Autodesk 3ds Max, and AutoCAD.

Bad actors create these videos with AI generation platforms such as Synthesia and D-ID, featuring synthetic presenters with familiar, trustworthy-looking faces. The format is a popular trend on social media and has long been used in recruitment, educational, and promotional material, CloudSEK noted.

The combination of these methods makes it easy to trick users into clicking malicious links and downloading infostealer malware. Once installed, it has access to the user’s private data, including “passwords, credit card information, bank account numbers, and other confidential data,” which can then be uploaded to the bad actor’s command-and-control server.

Other private information at risk from infostealer malware includes browser data, crypto wallet data, Telegram data, program files such as .txt files, and system information such as IP addresses.

While there are many antivirus and endpoint detection systems that can catch this new breed of AI-generated malware, there are also plenty of information stealer developers working to keep the ecosystem alive and well. Though CloudSEK noted that these bad actors sprang up alongside the AI boom in November 2022, some of the first media attention on hackers using ChatGPT-written code to create malware didn’t surface until early February.

Information stealer developers also recruit and collaborate with traffers, other actors who find and share information on potential victims through underground marketplaces, forums, and Telegram channels. Traffers are typically the ones who provide the fake websites, phishing emails, YouTube tutorials, or social media posts to which information stealer developers can attach their malware. In a similar scam, bad actors have hosted fake ads on social media and websites for the paid version of ChatGPT.

On YouTube, however, they are taking over accounts and uploading several videos at once to get the attention of the original creator’s subscribers. Bad actors will hijack both popular accounts and infrequently updated ones, for different purposes.

Taking over an account with over 100,000 subscribers and uploading five or six malware-laced videos is bound to get some clicks before the owner regains control of the account. Viewers might identify a video as nefarious and report it to YouTube, which will ultimately remove it, but on a less popular account the infected videos might stay live for some time before the owner even notices.

Adding fake comments and shortened bit.ly and cutt.ly links to the videos also makes them appear more legitimate.
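Because a shortened link hides its real destination, one practical precaution is to expand it before clicking. The snippet below is a minimal sketch in Python, assuming the requests library is installed; the bit.ly address shown is a placeholder for illustration, not a real link from these campaigns.

import requests

def expand_short_url(short_url: str, timeout: float = 10.0) -> str:
    # A HEAD request follows the redirect chain without downloading the target page body.
    response = requests.head(short_url, allow_redirects=True, timeout=timeout)
    return response.url

# Placeholder shortened link, used here only for illustration.
print(expand_short_url("https://bit.ly/example-placeholder"))

Some shorteners reject HEAD requests; swapping in requests.get with stream=True is a common fallback that still avoids reading the response body.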
