The Trojan Horses of AI
As enterprises rush to adopt AI and LLMs, two critical threats are often overlooked: training data poisoning and insecure plug-ins. Poisoned data can bias models, bypass safety filters, and embed harmful outputs. Insecure plug-ins expand the attack surface, enabling data leaks or malicious prompt injection. These aren't theoretical risks; they're happening now. Secure your AI pipeline from the ground up, or risk building intelligence on a compromised foundation.

Glen Thomas
Jul 2 · 2 min read


System Prompt Leakage
One of the risks climbing fast up the AI security charts in 2025 is System Prompt Leakage (System PLeak). Organizations need to be more proactive than ever in managing and securing their data, and in adopting a secure framework when developing AI tools.

Glen Thomas
Jun 19 · 2 min read