OpenAI and Microsoft Remove State-backed Hacker Groups From Their Apps

OpenAI and Microsoft found state-backed hacker groups from Russia, Iran, North Korea, and China using their AI tools. The accounts linked to the hackers were removed as soon as they were discovered.

The incident came to light on February 14, when OpenAI announced on X that it had collaborated with the Microsoft Threat Intelligence Center to block five state-backed hacker groups from using its platform.

The AI firm believes the hackers used its chatbots to carry out translations, run basic coding tasks to support their activities, and find errors in their code.

In short, their presence on the platform did not directly affect the safety of OpenAI's customers; the hackers simply used the tool as any other user would.

How Did These Groups Use the AI Chatbot?

Microsoft released a separate report highlighting how each of these groups used the AI tools and what activities they were engaged in. Here's the list:

  1. Emerald Sleet, a North Korea-based hacking group, used ChatGPT to develop spear-phishing content and to research North Korea-related topics, device vulnerabilities, and troubleshooting techniques.
  2. Charcoal Typhoon, a China-based hacking group, used ChatGPT to develop, script, and understand cybersecurity tools and to generate social engineering content.
  3. Salmon Typhoon, another China-based hacking group, leveraged ChatGPT mostly to research sensitive topics and dig up details about high-profile individuals. It also used the tool to improve its data-collection tooling and to research better ways to source private information.
  4. Forest Blizzard, a Russia-based hacking group, used ChatGPT to optimize its cyber tools and to research satellite and radar technologies associated with military operations.
  5. Crimson Sandstorm, an Iran-based hacking group, turned to ChatGPT to devise better attack techniques, create social engineering content, and get help with troubleshooting.

It's important to note that none of the groups used the tool to actually develop malware; had they tried, they would likely have been caught much earlier. They used it only for lower-level tasks such as research, error correction, and brainstorming.

Hacker groups using AI tools to execute their malicious plans isn’t all that surprising. Several cybersecurity firms have already reported that hackers are now using AI to speed up their work.

A report published in January by the United Kingdom's National Cyber Security Centre (NCSC) predicted that by 2025, hackers and advanced persistent threats (APTs) will benefit greatly from AI. OpenAI itself seems well aware of this possibility.

"We build AI tools that improve lives and help solve complex challenges, but we know that malicious actors will sometimes try to abuse our tools to harm others, including in furtherance of cyber operations." – OpenAI report

State-backed groups, which naturally have far more resources, pose a much bigger threat to this new digital ecosystem.

While no long-term plan to tackle the issue has been discussed yet, OpenAI promises to keep monitoring its platform to identify and disrupt state-backed hackers. It also plans to leverage intelligence shared across the industry, along with a dedicated team that looks for suspicious patterns, so that no hacker group slips past its radar.
