Threat actors can exploit ChatGPT to generate convincing phishing emails or deceptive content that lures users into downloading malware.
They may also use the model to obfuscate malicious code or to assist in social engineering attacks, making it harder for security systems to detect and block these activities.
Further reading: ChatGPT-Powered Malware Attacking Cloud Platforms (cybersecuritynews.com)