Pillar Group Advisory


Agentic AI is here - how strong is your governance?
Agentic AI is everywhere. Deployed effectively, it can detect known and emerging threats and trigger countermeasures in near real time. But several challenges associated with agentic AI are keeping CISOs up at night. Learn what those concerns are and what your organization should be doing to build appropriate security and governance frameworks into its agentic AI systems from day one.

Glen Thomas
Nov 4 · 2 min read


The $670,000 Shadow
Shadow AI - unauthorized tools deployed by employees without oversight - isn't a future threat. It's here already, and the costs to organizations that fail to address it are staggering. Here's what your board and C-suite should be asking at their next meeting, and what your organization needs to be doing to address shadow AI while there's still time to get ahead of it.

Glen Thomas
Nov 3 · 5 min read


The Agentic AI Security Challenge
When AI systems move from generating words to independently accessing systems, the security implications shift dramatically. A single compromise can cascade across business-critical systems in ways that conventional security controls were never designed to handle. Organizations deploying agentic AI need to implement security and governance frameworks from the start.

Glen Thomas
Oct 31 · 1 min read


Shadow AI
Embedding governance into your AI strategy not only mitigates risk but also enables secure innovation and builds stakeholder confidence. That's why Mission+ and Pillar Group Advisory have developed a Shadow AI Governance methodology, blending security, compliance, and enablement.

Glen Thomas
Sep 25 · 1 min read


The Trojan Horses of AI
As enterprises rush to adopt AI and LLMs, two critical threats are often overlooked: training data poisoning and insecure plug-ins. Poisoned data can bias models, bypass safety filters, and embed harmful outputs. Insecure plug-ins expand the attack surface, enabling data leaks or malicious prompt injection. These aren’t theoretical risks—they're happening now. Secure your AI pipeline from the ground up, or risk building intelligence on a compromised foundation.

Glen Thomas
Jul 2 · 2 min read


System Prompt Leakage
One of the risks climbing fast up the AI security charts in 2025 is System Prompt Leakage (System PLeak). Organizations need to be more proactive than ever in managing and securing their data, and in adopting a secure framework when developing AI tools.

Glen Thomas
Jun 19 · 2 min read
Expert analysis, industry insights, and the latest news.