Recent security testing shows that LLM deployments fail in varied and often unexpected ways. Over 40% of models are vulnerable to prompt injection, more than half are susceptible to poisoned retrieval data, and multi-agent architectures open trust-based attack paths that are still poorly understood. The numbers get worse from there: a joint study by researchers from OpenAI, Anthropic, and Google DeepMind found that adaptive attackers bypassed all 12 tested LLM defenses with success rates above 90%. This implies that most defenses, as currently designed, do not hold up against a determined adversary.

LLM firewalls, security layers that intercept and filter traffic between users and LLMs, have emerged as a defense-in-depth response. The market is still immature, with vendors applying the same label to products that differ significantly in what they actually do. For CIOs and CISOs, the question is no …