ChatGPT-themed scams on the rise

Scammers use diverse tactics to trick users into revealing confidential information, according to research from Palo Alto Networks.
26 July 2023

The explosive popularity of generative AI programmes in recent months has been accompanied by a surge in ChatGPT-themed malware.

Recently-released research from Unit 42, the threat intelligence team of global cybersecurity leader Palo Alto Networks, sheds light on the diverse tactics employed by scammers.

ChatGPT (Generative Pre-Trained Transformer) is a large language model-based chatbot owned and operated by OpenAI, an artificial intelligence research and development company. The chatbot, which is available for free in its basic version, has proved lucrative for opportunistic scammers looking to cash in on its increasing popularity.

Between November 2022 and April 2023, Unit 42 observed a 910% increase in monthly registrations for domains related to ChatGPT. There were over 100 daily detections of ChatGPT-related malicious URLs in traffic seen by the company. Over the same period, the team observed a striking 18,000% growth in squatting domains in DNS security logs.

‘Squatting domains’ are domains registered or used for the purpose of profiting from the goodwill of a trademark belonging to someone else. In this case, bad actors use ‘openai’ or ‘chatgpt’ as, or within, the domain name: for example, ‘openai[.]us’ or ‘chatgpt[.]jobs’.
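
For illustration, a minimal Python sketch of the idea: flag any domain that contains one of the brand keywords but is not on a known-legitimate allowlist. The allowlist and sample domains below are assumptions for demonstration, not Unit 42's actual detection logic.

```python
# Illustrative sketch only: flag domains that trade on the 'openai' or
# 'chatgpt' brand names but are not on a known-good allowlist.

BRAND_KEYWORDS = ("openai", "chatgpt")
LEGITIMATE_DOMAINS = {"openai.com", "chat.openai.com"}  # assumed allowlist

def is_possible_squat(domain: str) -> bool:
    """Return True if a brand keyword appears in a non-allowlisted domain."""
    domain = domain.lower().rstrip(".")
    if domain in LEGITIMATE_DOMAINS or domain.endswith(".openai.com"):
        return False
    return any(keyword in domain for keyword in BRAND_KEYWORDS)

# The bracketed dots ('openai[.]us') are a defanging convention;
# restore them before checking.
for raw in ("openai[.]us", "chatgpt[.]jobs", "chat.openai.com", "example.org"):
    domain = raw.replace("[.]", ".")
    print(f"{raw:16} -> {'possible squat' if is_possible_squat(domain) else 'ok'}")
```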

Although most of the squatting domains identified were not hosting anything malicious as of April 2023, they notably are not controlled by OpenAI or other legitimate companies.

Unit 42’s study looked at several phishing URLs that impersonated the OpenAI website. The individuals behind these phishing scams typically create fake websites that closely mimic the appearance of the official site, and trick users into downloading malware or sharing sensitive information.

A common technique presents users with a ‘DOWNLOAD’ button which, once clicked, downloads Trojan malware to the device without victims realizing the risk.
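
One simple defence against this technique is to check where a download link actually points before clicking it. The sketch below accepts only an assumed set of official hostnames and their subdomains; the hostnames and sample URLs are illustrative, not an authoritative list.

```python
# Illustrative sketch only: before trusting a 'DOWNLOAD' button, check
# where the link actually points. The set of official hostnames is an
# assumption for demonstration.
from urllib.parse import urlparse

OFFICIAL_HOSTS = {"openai.com", "chat.openai.com"}  # assumed

def is_official_link(url: str) -> bool:
    """Accept only the assumed official hosts and their subdomains."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == h or host.endswith("." + h) for h in OFFICIAL_HOSTS)

print(is_official_link("https://openai.com/download"))        # True
print(is_official_link("https://openai.us.example/download")) # False
```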

Image: a hacker using OpenAI. Source: Shutterstock AI.

Another common scam tactic involves the use of ChatGPT-related social engineering for identity theft or financial fraud. Although OpenAI offers a free version of ChatGPT, fraudulent websites often claim that users must pay for its services, and try to lure victims into providing sensitive information such as credit card details and email addresses.

The use of copycat chatbots also poses significant security risks. Some copycat applications – many of which are based on GPT-3 (released in 2020), which is less powerful than more recent versions – offer their own large language models, while others claim to offer ChatGPT services through OpenAI’s public API. ChatGPT is not accessible in certain regions, and prior to the release of the API there were several open-source projects that enabled users to connect to ChatGPT through various automation tools. Websites created with these automation tools or the OpenAI API could therefore attract a lot of traffic from these regions. This also provided bad actors with the opportunity to monetize ChatGPT by proxying its service.
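
To make the proxying pattern concrete, here is a minimal sketch of how a third-party site can relay visitor prompts to ChatGPT through OpenAI’s public API. It uses the 2023-era ChatCompletion interface of the openai Python package; the Flask wrapper, route name and key handling are assumptions for demonstration, not any specific operator’s code.

```python
# Illustrative sketch only: a third-party site relaying visitor prompts
# to ChatGPT through OpenAI's public API (2023-era ChatCompletion
# interface). The Flask wrapper, route and key handling are assumptions.
import os

import openai
from flask import Flask, jsonify, request

openai.api_key = os.environ["OPENAI_API_KEY"]  # the proxy operator's key
app = Flask(__name__)

@app.post("/chat")
def chat():
    # The proxy forwards the visitor's prompt to OpenAI and returns the
    # answer -- which also means the operator sees every prompt sent.
    prompt = request.get_json()["prompt"]
    completion = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return jsonify(answer=completion.choices[0].message.content)

if __name__ == "__main__":
    app.run()
```

Every prompt in this pattern passes through the proxy operator’s server in the clear – which is exactly the risk described next.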

Using these copycat bots comes with the additional risk of having your input collected and stolen. Any confidential or sensitive information you provide could leave you vulnerable. The bot’s responses could also be purposefully manipulated to provide inaccurate or misleading information.

To use an example from Unit 42’s study, the team downloaded an extension from a squatting domain that used the same information and video as the official OpenAI extension. Once installed, the fraudulent extension added a background script to the victim’s browser containing highly obfuscated JavaScript. This JavaScript calls the Facebook API to steal the victim’s account details, and may enable scammers to gain further access to the victim’s accounts.
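
Unit 42 did not publish the extension’s code, but a coarse scan of an extension’s manifest can surface the red flags involved here: a background script combined with broad host permissions. The file path and heuristics below are assumptions for illustration; real vetting requires reviewing the scripts themselves.

```python
# Illustrative sketch only: scan a browser extension's manifest for a
# background script plus broad host permissions. Path and heuristics
# are assumptions for demonstration.
import json
from pathlib import Path

def manifest_red_flags(manifest_path: str) -> list[str]:
    manifest = json.loads(Path(manifest_path).read_text())
    flags = []
    if "background" in manifest:
        flags.append("declares a background script or service worker")
    grants = manifest.get("permissions", []) + manifest.get("host_permissions", [])
    if any(g in ("<all_urls>", "*://*/*") for g in grants):
        flags.append("requests access to all sites")
    if any("facebook" in str(g) for g in grants):
        flags.append("requests access to facebook.com")
    return flags

print(manifest_red_flags("extension/manifest.json"))  # hypothetical path
```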

ChatGPT scams have also started to show up on mobile app stores in the form of fleeceware. These scam apps claim to offer free access to ChatGPT, but eventually start charging weekly or monthly subscription fees that can be difficult to cancel. When advertising these apps, developers often use tactics that screen out more scam-conscious and tech-savvy users, such as deliberately misspelling the app name in the title (e.g. ‘ChatGTP’).
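
A rough heuristic for spotting such near-miss names is fuzzy string matching, as in this sketch; the threshold and sample titles are assumptions for demonstration.

```python
# Illustrative sketch only: flag app titles that are near-misses of
# 'ChatGPT', such as the deliberate misspelling 'ChatGTP'.
from difflib import SequenceMatcher

def looks_like_chatgpt(title: str, threshold: float = 0.8) -> bool:
    """Flag names close to, but not exactly, 'chatgpt'."""
    name = title.lower()
    return name != "chatgpt" and SequenceMatcher(None, name, "chatgpt").ratio() >= threshold

for title in ("ChatGTP", "ChatGPT", "Weather App"):
    print(title, looks_like_chatgpt(title))
# ChatGTP True, ChatGPT False, Weather App False
```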

As ChatGPT continues to rise in popularity, we will undoubtedly see more scams of the sort detailed in Unit 42’s study, along with shifting tactics to keep the malware effective.