We use cookies to personalize content and to analyze our traffic. Please decide if you are willing to accept cookies from our website.

Articles by Tag: LLM

Four Key Strategies to Mitigate AI Supply Chain Risks from Large Language Models

Four Key Strategies to Mitigate AI Supply Chain Risks from Large Language Models

The rapid integration of large language models (LLMs) into AI applications brings significant benefits but also introduces several supply chain risks. Developers and security experts using LLMs must understand AI supply chain risks and know how to mitigate them effectively.
Locking down LLMs to Combat Jailbreaks

Locking down LLMs to Combat Jailbreaks

LLM jail-breaking (also known as LLM manipulation) forces LLMs to exhibit unwanted behavior. These LLMs may become examples of irresponsible and unethical AI, depending on what they are forced to do. Cybersecurity teams can ensure that their LLMs are responsible and ethical through resilience testing for jailbreaks and implementing multiple guardrails to combat jailbreaks.