Trend Micro Empowers Organizations to Tackle Malicious AI

Trend Micro Incorporated has shared more details of its ongoing commitment to protect customers around the world from emerging AI threats.

Vijendra Katiyar, Country Manager for India & SAARC: “AI tools like ChatGPT are taking the world by storm, but the technology is already being used by opportunistic threat actors to take advantage of gaps in enterprise security. Trend is leading the way globally in mitigating these threats through a prolific output of groundbreaking research and its own use of AI to supercharge both ASRM and XDR.”

Trend successfully blocked 73 billion threats for its global customer base in the first half of 2023, marking a 16% year-on-year increase. Those figures illustrate both the growing power of its threat detection capabilities and the sheer scale of today’s threat landscape.

Emerging malicious AI tools including WormGPT and FraudGPT are already being built on top of open-source generative AI platforms to democratize cybercrime, making hackers more productive and attacks more likely to succeed.

Trend Research recently revealed how threat actors are strengthening impersonation tactics by combining deepfake and AI voice cloning technology with generative AI for more effective “virtual kidnapping” scams. Adversaries leverage ChatGPT to filter and fuse large datasets for victim selection, and deepfakes are deployed to convince victims that a close relation has been kidnapped in order to extort a ransom.

Separate research from Trend uncovered the use of generative AI in training and supporting new threat actors, including activities such as:

  • Developing malicious polymorphic code
  • Creating detection-resistant malware
  • Creating highly convincing phishing emails for business email compromise (BEC) and webpages in multiple languages
  • Creating hacking tools
  • Identifying and analyzing vulnerabilities
  • Identifying card data for fraud
  • Accelerating tactic and technique learning

These tools are continually improved by cybercriminals and made accessible through subscription-based pricing to further reduce barriers to entry for aspiring hackers. The development and deployment of malicious AI has put escalating pressure on security teams to detect and respond to threats earlier and faster to ensure quicker containment and minimal damage.

Empowering security teams to detect and respond to malicious AI use, Trend Vision One™ leverages its own generative AI through the Companion virtual assistant, in addition to AI app detection models, to help SOC analysts match the speed and polymorphic nature of AI-driven attacks. It features:

  • XDR Incident Feature: Accelerating understanding of threat events by reducing the time spent researching and contextualizing alerts. On average it saves three minutes per alert, amounting to several hours per user per week.
  • Command-Line Feature: Streamlining and simplifying the decoding of complex scripts, saving analysts up to 40 minutes of manual investigation time.
  • Search Query Generator: Transforming plain-language search queries into formal search syntax, saving up to one hour of time spent hunting by assisting users with query development and field name, operator and value identification.
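To illustrate the general idea behind a search query generator, the sketch below maps plain-language requests onto a field:value search syntax. The field names, synonyms, and query grammar here are invented for illustration and do not reflect Trend Vision One's actual query language.

```python
import re

# Hypothetical field mapping: plain-language terms -> formal field names.
# Both sides are invented for this sketch.
FIELD_SYNONYMS = {
    "sender": "emailSender",
    "endpoint": "endpointName",
    "process": "processName",
}

def generate_query(plain_text: str) -> str:
    """Translate 'sender <value>'-style phrases into field:"value" clauses."""
    terms = []
    for field, formal_name in FIELD_SYNONYMS.items():
        # Look for patterns like "sender alice@example.com"
        match = re.search(rf"{field}\s+(\S+)", plain_text, re.IGNORECASE)
        if match:
            terms.append(f'{formal_name}:"{match.group(1)}"')
    return " AND ".join(terms)

print(generate_query("find sender alice@example.com on endpoint HR-LAPTOP-7"))
# emailSender:"alice@example.com" AND endpointName:"HR-LAPTOP-7"
```

A production assistant would use a language model rather than regex rules, but the input/output shape (free text in, formal query syntax out) is the same.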

Trend has been at the forefront of AI-powered security since 2005, with ongoing and planned investments in AI/ML and generative AI, including tooling designed to detect BEC attacks. Its Writing Style DNA technology learns a sender’s “normal” email writing style from previous messages and flags emails that deviate from this baseline. The technology blocked over 130,000 BEC attacks for customers throughout 2022.
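The general stylometry technique behind this kind of BEC detection can be sketched as follows: build a per-sender baseline vector of writing-style features from past emails, then flag new messages whose feature vector sits far from that baseline. The feature set, scaling, and data below are invented for illustration and are not Trend's actual model.

```python
import math
from collections import Counter

# Toy style features: relative frequency of a few function words,
# plus average word length (scaled into a comparable range).
FUNCTION_WORDS = ["the", "and", "to", "of", "please", "urgent"]

def style_vector(text: str) -> list[float]:
    words = text.lower().split()
    total = max(len(words), 1)
    counts = Counter(words)
    freqs = [counts[w] / total for w in FUNCTION_WORDS]
    avg_word_len = sum(len(w) for w in words) / total
    return freqs + [avg_word_len / 10.0]

def baseline(emails: list[str]) -> list[float]:
    """Average the style vectors of a sender's past emails."""
    vecs = [style_vector(e) for e in emails]
    return [sum(col) / len(vecs) for col in zip(*vecs)]

def deviation(text: str, base: list[float]) -> float:
    """Euclidean distance between a new email's style and the baseline."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(style_vector(text), base)))

past = [
    "please review the attached report and send feedback to the team",
    "the meeting notes are attached please forward to the group",
]
base = baseline(past)
normal = deviation("please see the attached summary and reply to me", base)
impostor = deviation("URGENT wire transfer needed immediately confidential", base)
assert impostor > normal  # the impostor text deviates more from the baseline
```

A real system would use far richer features and a learned per-sender threshold, but the deviation-from-baseline structure is the core of the approach.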
