YouTube is next on the AI-generated malware scam list


YouTube is the latest frontier where AI-generated content is being used to dupe users into downloading malware that can steal their personal information.

As AI generation grows more popular across platforms, so does the desire to profit from it in malicious ways. The research firm CloudSEK has observed a 200% to 300% increase since November 2022 in the number of YouTube videos whose descriptions contain links to stealer malware such as Vidar, RedLine, and Raccoon.

The videos are set up as tutorials for downloading cracked versions of software that typically requires a paid license, such as Photoshop, Premiere Pro, Autodesk 3ds Max, and AutoCAD.

Bad actors create these AI-generated videos on platforms such as Synthesia and D-ID, featuring synthetic humans with familiar, trustworthy features. CloudSEK noted that this style of video is a popular trend on social media and has long been used in recruitment, educational, and promotional material.

The combination of these methods makes it easy to trick users into clicking malicious links and downloading infostealer malware. Once installed, it has access to the user’s private data, including “passwords, credit card information, bank account numbers, and other confidential data,” which can then be uploaded to the bad actor’s command-and-control server.

Other private information at risk from infostealer malware includes browser data, crypto wallet data, Telegram data, files such as .txt documents, and system information such as IP addresses.

While many antivirus and endpoint-detection systems can catch this new breed of AI-assisted malware, there are also plenty of information stealer developers working to keep the ecosystem alive and well. Though CloudSEK noted that these bad actors sprang up alongside the AI boom in November 2022, the first media reports of hackers using ChatGPT-generated code to create malware didn’t surface until early February.

Information stealer developers also recruit and collaborate with traffers: other actors who can find and share information on potential…