Articles by Tag: llm

Small Enterprises, Big AI: How to Use LLMs to Stop Financial Fraud

Traditional fraud detection methods are resource-intensive and difficult to implement and maintain. Leveraging large language models (LLMs) offers a more efficient approach. LLMs can analyze vast amounts of data in real time, identifying fraud with less complexity. CIOs at SMEs should consider incorporating LLMs into their fraud detection systems to strengthen security while simplifying operations.
The LLM Takeover: AI For All

AI is becoming a necessary software feature for vendors to stay relevant and ahead of their competition. One major issue with AI in software is trusting that your business data remains private and protected. Without this trust, your software vendor or third parties could use your data to train their AI models. This article discusses how to manage AI-enabled software so that your data stays protected.
Just Cache It (Part 2): Prompt Caching vs RAG

Businesses continue to improve their efficiency with AI, which increases the need for LLMs that perform well on enterprise tasks. Fine-tuning is often too costly to be viable, so prompt caching (context caching) and Retrieval-Augmented Generation (RAG) are more suitable alternatives. AI engineers should read this article to learn how these two methods create cost-effective LLMs that perform well on enterprise data.
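The retrieval step at the heart of RAG can be sketched in a few lines. This toy version scores documents by word overlap with the query; the function names `retrieve` and `build_prompt` are illustrative, not from the article, and a production system would use embedding similarity and a vector store instead.

```python
def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Return the top_k documents sharing the most words with the query."""
    query_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]


def build_prompt(query: str, documents: list[str]) -> str:
    """Augment the prompt with retrieved context before calling the LLM."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}"
```

The point of the sketch is the shape of the pipeline: retrieve relevant enterprise data at query time and prepend it to the prompt, rather than baking the data into the model via fine-tuning.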
Just Cache It (Part 1): Maintaining Context with APIs and LLMs

APIs have made it easier than ever to build AI applications. One challenge, however, is the high cost of frequent API calls that resend similar content to an LLM to provide context. Prompt caching, or context caching, solves this by caching that shared content. AI engineers should use prompt caching to decrease inference fees and reduce latency.
EU Regulations, Technologies, AI Realities and Cyber Risks: 2024 Tech Insights

2024 saw significant shifts in technology, with the EU's AI Act and DMA impacting businesses alongside the rise of modular laptops and the persistent threat of cyber attacks. This review highlights developments that mattered to IT leaders and suggests that CIOs and IT executives should continue to prioritise compliance, evaluate new technologies, and strengthen cybersecurity in 2025.
Navigate the Technology Trends of 2025 – Compliance

From the EU AI Act to emerging state-level AI laws in the US, 2025 promises heightened scrutiny and demands on IT systems. Organizations must adopt forward-thinking strategies, leveraging emerging technologies like LLMs and governance tools, to navigate this terrain effectively. CIOs should prioritise proactive compliance measures to safeguard operations and maintain competitive advantage.
Four Key Strategies to Mitigate AI Supply Chain Risks from Large Language Models

The rapid integration of large language models (LLMs) into AI applications brings significant benefits but also introduces several supply chain risks. Developers and security experts using LLMs must understand AI supply chain risks and know how to mitigate them effectively.
Locking down LLMs to Combat Jailbreaks

LLM jailbreaking (also known as LLM manipulation) forces LLMs to exhibit unwanted behavior. Depending on what they are forced to do, these LLMs may become examples of irresponsible and unethical AI. Cybersecurity teams can keep their LLMs responsible and ethical by testing their resilience to jailbreaks and by implementing multiple guardrails.
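One such guardrail, an input filter for known jailbreak phrasings, might be sketched as follows. The patterns and function names are illustrative only; real deployments layer several defences, such as input filters, output classifiers, and hardened system prompts.

```python
import re

# Illustrative patterns for common jailbreak phrasings (not exhaustive).
JAILBREAK_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"pretend (you are|to be)",
    r"developer mode",
]


def is_suspicious(user_input: str) -> bool:
    """Return True if the input matches a known jailbreak pattern."""
    text = user_input.lower()
    return any(re.search(pattern, text) for pattern in JAILBREAK_PATTERNS)


def guarded_prompt(user_input: str) -> str:
    """Refuse flagged inputs instead of forwarding them to the LLM."""
    if is_suspicious(user_input):
        return "Request blocked by guardrail."
    return user_input
```

A pattern filter like this is easy to bypass on its own, which is exactly why the article's advice is to combine multiple guardrails with resilience testing rather than rely on any single check.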