Criminals Have Created Their Own ChatGPT Clones

Cybercriminals have reportedly developed their own versions of large language models (LLMs), similar to OpenAI’s ChatGPT, to amplify criminal activities such as crafting phishing emails and malware. These illicit chatbots have been advertised on dark-web forums and marketplaces since early July, raising questions about their authenticity and purpose.

These rogue LLMs, known as WormGPT and FraudGPT, are advertised as lacking the safety measures built into legitimate LLMs from companies such as Google, Microsoft, and OpenAI. Their developers claim the systems can sidestep ethical guardrails and generate text without restriction, enabling a range of illicit activities.

WormGPT, which allegedly builds on the open-source GPT-J language model, is marketed as a tool for phishing schemes and is pitched particularly at novice cybercriminals. Its sellers claim it can create emails that are not only persuasive but strategically cunning, as demonstrated in a simulated business email compromise scam.

FraudGPT’s creator boasts that the chatbot can generate undetectable malware, locate leaks and vulnerabilities, and assist with online scams. The system is offered for a subscription fee, putting its claimed capabilities within easy reach of paying cybercriminals.

The authenticity of these criminal chatbots remains uncertain: the cybercriminal ecosystem is rife with scams, and experts note that cybercriminals routinely defraud one another, casting doubt on whether these tools work as advertised.

While some evidence suggests WormGPT may genuinely be in use, doubts persist about FraudGPT’s credibility. The broader concern is that scammers continue to exploit the hype around generative AI, prompting calls for greater transparency, accountability, and oversight in the deployment of AI technologies.

As the situation unfolds, experts remain vigilant, assessing the risks posed by these illicit chatbots and advocating for safeguards against their misuse.

Aihub Team
