Microsoft Exposes LLMjacking Cybercriminals Behind Azure AI Abuse Scheme


On Thursday, Microsoft unmasked four individuals it claims are behind an Azure Abuse Enterprise scheme that uses unauthorized access to generative artificial intelligence (GenAI) capabilities to create harmful and offensive content.

The LLMjacking campaign has targeted several AI products, notably Microsoft’s Azure OpenAI Service. The tech giant tracks the criminal network as Storm-2139. The individuals named are:

  • Arian Yadegarnia aka “Fiz” of Iran,
  • Alan Krysiak aka “Drago” of the United Kingdom,
  • Ricky Yuen aka “cg-dot” of Hong Kong, China, and
  • Phát Phùng Tấn aka “Asakuri” of Vietnam.

According to Steven Masada, associate general counsel for Microsoft’s Digital Crimes Unit (DCU), “Members of Storm-2139 unlawfully accessed accounts with certain generative AI services by exploiting exposed customer credentials scraped from public sources.”

“They subsequently modified the features of these services and resold access to other fraudulent individuals, offering thorough guidance on how to create damaging and illegal content, including unlawful intimate images of celebrities and other sexually explicit content.”
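The credential-scraping technique Masada describes, harvesting API keys accidentally published in public sources, is a well-known exposure that organizations can screen for themselves. As a purely illustrative, defensive sketch (the key pattern and the sample string below are hypothetical; real secret scanners use much richer, provider-specific rule sets), one can flag strings that look like hexadecimal API keys before they are committed:

```python
import re

# Hypothetical pattern: many service API keys resemble 32-character
# hexadecimal strings. Production scanners add entropy checks,
# provider-specific prefixes, and allow-lists to cut false positives.
KEY_PATTERN = re.compile(r"\b[0-9a-f]{32}\b", re.IGNORECASE)

def find_suspected_keys(text: str) -> list[tuple[int, str]]:
    """Return (line_number, matched_string) pairs for key-like strings."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for match in KEY_PATTERN.finditer(line):
            hits.append((lineno, match.group()))
    return hits

# Example: a config file accidentally containing a (fake) key.
leaked = "debug = true\napi_key = 'a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6'\n"
for lineno, key in find_suspected_keys(leaked):
    print(f"line {lineno}: possible exposed key {key[:6]}... redact before publishing")
```

Running a check like this in pre-commit hooks or CI is one common mitigation against exactly the kind of public credential exposure Storm-2139 exploited.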

According to Redmond, the malicious activity is conducted with the express goal of bypassing the safety guardrails built into generative AI systems.

The amended complaint was filed just over a month after Microsoft said it would take legal action against the threat actors for systematically stealing API keys from a number of customers, including numerous American businesses, and then selling that access to other actors.

Additionally, it was granted a court order to take control of a website (“aitism[.]net”) that is thought to have played a significant role in the group’s illegal activities.

Storm-2139 comprises three main groups: creators, who develop the illicit tools that enable the abuse of AI services; providers, who modify and sell these tools to customers at various price points; and end users, who use them to generate synthetic content that violates Microsoft’s Acceptable Use Policy and Code of Conduct.

Microsoft said it has also identified two other perpetrators located in the US states of Florida and Illinois. Their identities are being withheld to avoid interfering with potential criminal investigations.

The other unnamed co-conspirators, providers, and end users are listed below:

  • A John Doe (DOE 2) who likely resides in the United States.
  • A John Doe (DOE 3) who likely resides in Austria and uses the alias “Sekrit.”
  • A person who likely resides in the United States and uses the alias “Pepsi.”
  • A person who likely resides in the United States and uses the alias “Pebble.”
  • A person who likely resides in the United Kingdom and uses the alias “dazz.”
  • A person who likely resides in the United States and uses the alias “Jorge.”
  • A person who likely resides in Turkey and uses the alias “jawajawaable.”
  • A person who likely resides in Russia and uses the alias “1phlgm.”
  • A John Doe (DOE 8) who likely resides in Argentina.
  • A John Doe (DOE 9) who likely resides in Paraguay.
  • A John Doe (DOE 10) who likely resides in Denmark.

“Going after malicious actors requires persistence and ongoing vigilance,” Masada stated. “By unmasking these individuals and shining a light on their malicious activities, Microsoft aims to set a precedent in the fight against AI technology misuse.”

About The Author:

Yogesh Naager is a content marketer who specializes in the cybersecurity and B2B space. Besides writing for the News4Hackers blog, he also writes for brands including Craw Security, Bytecode Security, and NASSCOM.
