Pillar Group Advisory


The Agentic AI Security Challenge
When AI systems move from generating words to independently accessing systems, the security implications shift dramatically. A single compromise can cascade across business-critical systems in ways that conventional security controls were never designed to handle. Organizations deploying agentic AI need to implement security and governance frameworks from the start.

Glen Thomas
Oct 31 · 1 min read


The Trojan Horses of AI
As enterprises rush to adopt AI and LLMs, two critical threats are often overlooked: training data poisoning and insecure plug-ins. Poisoned data can bias models, bypass safety filters, and embed harmful outputs. Insecure plug-ins expand the attack surface, enabling data leaks or malicious prompt injection. These aren't theoretical risks; they're happening now. Secure your AI pipeline from the ground up, or risk building intelligence on a compromised foundation.

Glen Thomas
Jul 2 · 2 min read


System Prompt Leakage
One of the risks climbing fast up the AI security charts in 2025 is System Prompt Leakage (System PLeak). Organizations need to be more proactive than ever in managing and securing data, and in applying a secure development framework when building AI tools.

Glen Thomas
Jun 19 · 2 min read
Expert analysis, industry insights and latest news.