Wednesday, May 6, 2026

How to Use AI for Cybersecurity Without Creating New Risks in 2026

AI is the most significant capability change in defensive security since endpoint detection and response emerged as a category. My experience over the past two years is that the organisations getting the most value from AI security tools share two characteristics: they defined measurable success criteria before deployment, not after, and they deployed AI to augment existing capabilities rather than replace…

Read full article →

LLM04 Data and Model Poisoning 2026 — Corrupting AI From the Training Phase | AI LLM Hacking Class Day 8

🤖 Part of the free AI/LLM Hacking Course — 90 Days · Day 8 of 90 · 8.9% complete. ⚠️ Authorised Research Only: Data poisoning and backdoor testing involves modifying training pipelines and testing model behaviour under adversarial conditions. All exercises use controlled environments — your own models, your own training runs, or academic research datasets. Never introduce poisoned data into production training pipelines or third-party model repositories. SecurityElites.com accepts no liability for misuse. A researcher at a…

Read full article →

What Does AI Know About You? More Than You Think 2026

Every conversation you have with an AI assistant is potentially stored, analysed, and used to improve the model you're talking to. Beyond that, the AI companies building these tools are part of broader ecosystems — Google, Microsoft, Meta — that have been building detailed profiles of you for years. What AI systems actually know about you depends on which tools you use, which accounts they are connected to, and whether you have ever changed the default settings. Here is the…

Read full article →

Tuesday, May 5, 2026

Can AI Write Malware? What the Research Shows — And What Defenders Must Know (2026)

Yes — AI tools can assist in generating malicious code, and security researchers have been documenting this capability since 2022. My assessment after tracking this research closely: the threat is real, but the honest picture is more nuanced than the headlines suggest. What AI produces still requires human expertise to weaponise effectively, existing defences are adapting, and the documented threat looks different from the sensationalised version. Here is what the…

Read full article →

Is AI Watching You? How AI Surveillance Works in 2026

Yes — AI systems are collecting, analysing and making decisions about you right now. My assessment after years of working in security and privacy: the reality is more targeted and more consequential in specific areas than the headlines suggest, and less science-fiction in others. Some of this is legal, transparent, and something you agreed to. Some of it is invisible. The honest picture is more nuanced than either "AI is watching everything" or "you have nothing…

Read full article →

ChatGPT vs Gemini vs Claude Security Comparison — Which AI Is Safest to Use in 2026?

All three are excellent AI assistants. But "which is best" and "which is safest" are different questions with different answers. I use all three professionally — in security assessments, in research, and in client work. My evaluation here isn't about which writes better poetry — there are thousands of articles doing that comparison. It's about data retention policies, breach history, jailbreak resistance, what each company can see from your conversations, and which plans offer meaningful privacy protections. Here is the…

Read full article →

What Is an LLM? Large Language Models Explained for Security Teams 2026

Every serious security topic in 2026 eventually requires understanding what a large language model actually is. Prompt injection, jailbreaking, model theft, adversarial inputs, hallucination exploitation — all of these attack categories only make sense once you understand the underlying architecture. My goal in this guide is to explain LLMs the way I explain them in security briefings: technically accurate, practically focused, and without the machine learning PhD prerequisites. If you understand how LLMs work, you understand why they're vulnerable in…

Read full article →