While you and your team enjoy the productivity benefits of AI tools like ChatGPT, Perplexity, and NotebookLM…
Cybercriminals are using AI to hack into the systems of businesses large and small.
Your prompt is,
“Hey ChatGPT, please write me a LinkedIn post on X topic.”
Their prompt is,
“Hey ChatGPT, please write me a phishing email to send to {{contact.first_name}}.”
Well, not quite, but you get the idea.
Here are the most common types of AI-enabled cyberattacks.
AI-driven Social Engineering
Sophisticated cyberattacks that use AI to identify targets, develop fake personas, create plausible scenarios, and generate personalized content that manipulates people into handing over access to sensitive data or systems.
AI-driven Phishing
Uses generative AI to create highly convincing communications across multiple channels, including AI-powered chatbots that can engage in real-time conversations at scale while posing as legitimate service agents.
Deepfakes
AI-generated deceptive media (video, image, or audio) used in cyberattacks and disinformation campaigns, capable of mimicking real people to instruct targets to perform specific actions.
Adversarial AI/ML
Attacks aimed at compromising AI systems themselves, through three main methods: poisoning attacks (corrupting the training data), evasion attacks (crafting inputs that fool the model at inference time), and model tampering (altering the model's parameters).
Malicious GPTs
Modified AI language models designed to produce harmful outputs, capable of generating malware and fraudulent content to support cyberattacks.
AI-enabled Ransomware
Enhanced ransomware that uses AI to improve target selection, vulnerability identification, and self-modification capabilities to evade detection by security tools.
What can you do?
Besides following standard security best practices, the best thing you can do is adopt AI-enhanced cybersecurity tools.
I have some ideas on how we can better protect your business.
Let me know if you’d like to chat about them this week.